Jul 2 09:04:07.400226 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:04:07.400249 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:04:07.400258 kernel: KASLR enabled
Jul 2 09:04:07.400266 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 09:04:07.400271 kernel: printk: bootconsole [pl11] enabled
Jul 2 09:04:07.400277 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:04:07.400284 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18
Jul 2 09:04:07.400290 kernel: random: crng init done
Jul 2 09:04:07.400296 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:04:07.400302 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 09:04:07.400308 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400314 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400321 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 09:04:07.400328 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400335 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400341 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400348 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400355 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400362 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400368 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 09:04:07.400374 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 09:04:07.400381 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 09:04:07.400387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 2 09:04:07.400393 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 2 09:04:07.400399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 2 09:04:07.400406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 2 09:04:07.400412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 2 09:04:07.400418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 2 09:04:07.400426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 2 09:04:07.400433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 2 09:04:07.400439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 2 09:04:07.400445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 2 09:04:07.400452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 2 09:04:07.400458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 2 09:04:07.400464 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jul 2 09:04:07.400470 kernel: Zone ranges:
Jul 2 09:04:07.400476 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 09:04:07.400483 kernel: DMA32 empty
Jul 2 09:04:07.400489 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 09:04:07.400497 kernel: Movable zone start for each node
Jul 2 09:04:07.400506 kernel: Early memory node ranges
Jul 2 09:04:07.400513 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 09:04:07.400520 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 09:04:07.400527 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 09:04:07.400536 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 09:04:07.400543 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 09:04:07.400549 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 09:04:07.400556 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 09:04:07.400563 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 09:04:07.400569 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 09:04:07.400576 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 09:04:07.400583 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 09:04:07.400589 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:04:07.400596 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:04:07.400603 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:04:07.400610 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 09:04:07.400618 kernel: psci: SMC Calling Convention v1.4
Jul 2 09:04:07.400625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 2 09:04:07.400632 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 2 09:04:07.400638 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:04:07.400645 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:04:07.400652 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 09:04:07.400659 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:04:07.400665 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:04:07.400672 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:04:07.400679 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:04:07.400685 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:04:07.400692 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:04:07.400701 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:04:07.400708 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 09:04:07.400714 kernel: alternatives: applying boot alternatives
Jul 2 09:04:07.400723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:04:07.400730 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:04:07.400736 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:04:07.400743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:04:07.400750 kernel: Fallback order for Node 0: 0
Jul 2 09:04:07.400757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 09:04:07.400763 kernel: Policy zone: Normal
Jul 2 09:04:07.400771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:04:07.400778 kernel: software IO TLB: area num 2.
Jul 2 09:04:07.400785 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB)
Jul 2 09:04:07.400792 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved)
Jul 2 09:04:07.400799 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 09:04:07.400805 kernel: trace event string verifier disabled
Jul 2 09:04:07.400812 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:04:07.400819 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:04:07.400826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 09:04:07.400833 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:04:07.400840 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:04:07.400846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:04:07.400855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 09:04:07.400862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:04:07.400868 kernel: GICv3: 960 SPIs implemented
Jul 2 09:04:07.400875 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:04:07.400881 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:04:07.400888 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:04:07.400895 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 09:04:07.400902 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 09:04:07.400909 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:04:07.400916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:04:07.400923 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:04:07.400931 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:04:07.400939 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:04:07.400946 kernel: Console: colour dummy device 80x25
Jul 2 09:04:07.400953 kernel: printk: console [tty1] enabled
Jul 2 09:04:07.400960 kernel: ACPI: Core revision 20230628
Jul 2 09:04:07.400967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:04:07.400975 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:04:07.400981 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:04:07.400999 kernel: SELinux: Initializing.
Jul 2 09:04:07.401007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:04:07.401016 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:04:07.401023 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:04:07.401030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:04:07.401037 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 09:04:07.401045 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 09:04:07.401051 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 09:04:07.401059 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:04:07.401072 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:04:07.401080 kernel: Remapping and enabling EFI services.
Jul 2 09:04:07.401087 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:04:07.401094 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:04:07.401103 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 09:04:07.401111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:04:07.401118 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:04:07.401126 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 09:04:07.401133 kernel: SMP: Total of 2 processors activated.
Jul 2 09:04:07.401142 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:04:07.401149 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 09:04:07.401157 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:04:07.401164 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:04:07.401172 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:04:07.401179 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:04:07.401187 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:04:07.401194 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:04:07.401201 kernel: alternatives: applying system-wide alternatives
Jul 2 09:04:07.401210 kernel: devtmpfs: initialized
Jul 2 09:04:07.401217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:04:07.401224 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 09:04:07.401232 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:04:07.401239 kernel: SMBIOS 3.1.0 present.
Jul 2 09:04:07.401247 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 09:04:07.401254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:04:07.401262 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:04:07.401270 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:04:07.401278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:04:07.401286 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:04:07.401293 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Jul 2 09:04:07.401300 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:04:07.401308 kernel: cpuidle: using governor menu
Jul 2 09:04:07.401315 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:04:07.401322 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:04:07.401329 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:04:07.401337 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:04:07.401346 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:04:07.401353 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:04:07.401360 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:04:07.401368 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:04:07.401375 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:04:07.401382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:04:07.401389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:04:07.401397 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:04:07.401404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:04:07.401413 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:04:07.401420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:04:07.401428 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:04:07.401435 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:04:07.401442 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:04:07.401449 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:04:07.401457 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:04:07.401464 kernel: ACPI: Interpreter enabled
Jul 2 09:04:07.401471 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:04:07.401480 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:04:07.401487 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:04:07.401495 kernel: printk: bootconsole [pl11] disabled
Jul 2 09:04:07.401502 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 09:04:07.401509 kernel: iommu: Default domain type: Translated
Jul 2 09:04:07.401516 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:04:07.401523 kernel: efivars: Registered efivars operations
Jul 2 09:04:07.401530 kernel: vgaarb: loaded
Jul 2 09:04:07.401537 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:04:07.401546 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:04:07.401554 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:04:07.401562 kernel: pnp: PnP ACPI init
Jul 2 09:04:07.401569 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 09:04:07.401576 kernel: NET: Registered PF_INET protocol family
Jul 2 09:04:07.401583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:04:07.401591 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:04:07.401598 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:04:07.401605 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:04:07.401614 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:04:07.401622 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:04:07.401629 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:04:07.401636 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:04:07.401643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:04:07.401651 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:04:07.401658 kernel: kvm [1]: HYP mode not available
Jul 2 09:04:07.401665 kernel: Initialise system trusted keyrings
Jul 2 09:04:07.401672 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:04:07.401682 kernel: Key type asymmetric registered
Jul 2 09:04:07.401689 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:04:07.401696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:04:07.401703 kernel: io scheduler mq-deadline registered
Jul 2 09:04:07.401711 kernel: io scheduler kyber registered
Jul 2 09:04:07.401718 kernel: io scheduler bfq registered
Jul 2 09:04:07.401725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:04:07.401732 kernel: thunder_xcv, ver 1.0
Jul 2 09:04:07.401739 kernel: thunder_bgx, ver 1.0
Jul 2 09:04:07.401747 kernel: nicpf, ver 1.0
Jul 2 09:04:07.401756 kernel: nicvf, ver 1.0
Jul 2 09:04:07.401897 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:04:07.401970 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:04:06 UTC (1719911046)
Jul 2 09:04:07.401980 kernel: efifb: probing for efifb
Jul 2 09:04:07.401999 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 09:04:07.402007 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 09:04:07.402014 kernel: efifb: scrolling: redraw
Jul 2 09:04:07.402024 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 09:04:07.402032 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 09:04:07.402039 kernel: fb0: EFI VGA frame buffer device
Jul 2 09:04:07.402046 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 09:04:07.402054 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:04:07.402061 kernel: No ACPI PMU IRQ for CPU0
Jul 2 09:04:07.402068 kernel: No ACPI PMU IRQ for CPU1
Jul 2 09:04:07.402075 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 09:04:07.402083 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:04:07.402092 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:04:07.402099 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:04:07.402106 kernel: Segment Routing with IPv6
Jul 2 09:04:07.402114 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:04:07.402121 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:04:07.402128 kernel: Key type dns_resolver registered
Jul 2 09:04:07.402135 kernel: registered taskstats version 1
Jul 2 09:04:07.402142 kernel: Loading compiled-in X.509 certificates
Jul 2 09:04:07.402150 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:04:07.402159 kernel: Key type .fscrypt registered
Jul 2 09:04:07.402166 kernel: Key type fscrypt-provisioning registered
Jul 2 09:04:07.402174 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:04:07.402181 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:04:07.402188 kernel: ima: No architecture policies found
Jul 2 09:04:07.402196 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:04:07.402203 kernel: clk: Disabling unused clocks
Jul 2 09:04:07.402210 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:04:07.402217 kernel: Run /init as init process
Jul 2 09:04:07.402226 kernel: with arguments:
Jul 2 09:04:07.402233 kernel: /init
Jul 2 09:04:07.402240 kernel: with environment:
Jul 2 09:04:07.402247 kernel: HOME=/
Jul 2 09:04:07.402254 kernel: TERM=linux
Jul 2 09:04:07.402261 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:04:07.402270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:04:07.402279 systemd[1]: Detected virtualization microsoft.
Jul 2 09:04:07.402289 systemd[1]: Detected architecture arm64.
Jul 2 09:04:07.402297 systemd[1]: Running in initrd.
Jul 2 09:04:07.402304 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:04:07.402312 systemd[1]: Hostname set to .
Jul 2 09:04:07.402320 systemd[1]: Initializing machine ID from random generator.
Jul 2 09:04:07.402328 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:04:07.402336 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:04:07.402343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:04:07.402354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:04:07.402362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:04:07.402370 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:04:07.402378 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:04:07.402387 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:04:07.402396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:04:07.402403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:04:07.402413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:04:07.402421 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:04:07.402429 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:04:07.402437 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:04:07.402444 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:04:07.402452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:04:07.402460 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:04:07.402468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:04:07.402478 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:04:07.402486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:04:07.402494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:04:07.402502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:04:07.402510 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:04:07.402518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:04:07.402526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:04:07.402534 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:04:07.402542 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:04:07.402552 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:04:07.402560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:04:07.402590 systemd-journald[217]: Collecting audit messages is disabled.
Jul 2 09:04:07.402611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:04:07.402622 systemd-journald[217]: Journal started
Jul 2 09:04:07.402641 systemd-journald[217]: Runtime Journal (/run/log/journal/e6aa7933201a40c2b9493ffdadfc546e) is 8.0M, max 78.6M, 70.6M free.
Jul 2 09:04:07.412385 systemd-modules-load[218]: Inserted module 'overlay'
Jul 2 09:04:07.441326 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:04:07.441407 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:04:07.451472 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jul 2 09:04:07.458394 kernel: Bridge firewalling registered
Jul 2 09:04:07.453977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:04:07.475033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:04:07.482580 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:04:07.493640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:04:07.503831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:04:07.531327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:04:07.544191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:04:07.558244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:04:07.595168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:04:07.604022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:04:07.619840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:04:07.626562 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:04:07.644289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:04:07.680260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:04:07.695327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:04:07.708225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:04:07.719488 dracut-cmdline[250]: dracut-dracut-053
Jul 2 09:04:07.719488 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:04:07.770052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:04:07.786391 systemd-resolved[256]: Positive Trust Anchors:
Jul 2 09:04:07.786402 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:04:07.786433 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:04:07.788741 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 2 09:04:07.790241 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:04:07.798563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:04:07.928021 kernel: SCSI subsystem initialized
Jul 2 09:04:07.936017 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:04:07.947006 kernel: iscsi: registered transport (tcp)
Jul 2 09:04:07.966001 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:04:07.966052 kernel: QLogic iSCSI HBA Driver
Jul 2 09:04:08.007887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:04:08.022353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:04:08.056821 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:04:08.056919 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:04:08.064410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:04:08.116019 kernel: raid6: neonx8 gen() 15716 MB/s
Jul 2 09:04:08.136000 kernel: raid6: neonx4 gen() 15667 MB/s
Jul 2 09:04:08.155999 kernel: raid6: neonx2 gen() 13240 MB/s
Jul 2 09:04:08.176999 kernel: raid6: neonx1 gen() 10461 MB/s
Jul 2 09:04:08.196998 kernel: raid6: int64x8 gen() 6960 MB/s
Jul 2 09:04:08.217011 kernel: raid6: int64x4 gen() 7347 MB/s
Jul 2 09:04:08.238007 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 2 09:04:08.262718 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 2 09:04:08.262736 kernel: raid6: using algorithm neonx8 gen() 15716 MB/s
Jul 2 09:04:08.287248 kernel: raid6: .... xor() 12024 MB/s, rmw enabled
Jul 2 09:04:08.287260 kernel: raid6: using neon recovery algorithm
Jul 2 09:04:08.299179 kernel: xor: measuring software checksum speed
Jul 2 09:04:08.299197 kernel: 8regs : 19864 MB/sec
Jul 2 09:04:08.303207 kernel: 32regs : 19725 MB/sec
Jul 2 09:04:08.307060 kernel: arm64_neon : 27134 MB/sec
Jul 2 09:04:08.311140 kernel: xor: using function: arm64_neon (27134 MB/sec)
Jul 2 09:04:08.365018 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:04:08.374573 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:04:08.391285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:04:08.415184 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jul 2 09:04:08.420967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:04:08.441165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:04:08.465439 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Jul 2 09:04:08.498721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:04:08.518349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:04:08.561089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:04:08.583253 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:04:08.615370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:04:08.631878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:04:08.649069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:04:08.666315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:04:08.685024 kernel: hv_vmbus: Vmbus version:5.3
Jul 2 09:04:08.686172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:04:08.720539 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 09:04:08.720608 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 09:04:08.721007 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 09:04:08.726947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:04:08.727145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:04:08.762342 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 09:04:08.762398 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 09:04:08.762409 kernel: PTP clock support registered
Jul 2 09:04:08.762419 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 09:04:08.774555 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:04:08.818168 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 09:04:08.818203 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 09:04:08.818382 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 09:04:08.804106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:04:08.830998 kernel: scsi host0: storvsc_host_t Jul 2 09:04:08.831202 kernel: scsi host1: storvsc_host_t Jul 2 09:04:08.831226 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 09:04:08.804351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:08.870060 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 09:04:08.870085 kernel: hv_vmbus: registering driver hv_utils Jul 2 09:04:08.870095 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 09:04:08.870105 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 09:04:08.870122 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 09:04:08.870294 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 09:04:08.827922 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:04:08.377390 systemd-resolved[256]: Clock change detected. Flushing caches. Jul 2 09:04:08.403433 systemd-journald[217]: Time jumped backwards, rotating. Jul 2 09:04:08.394843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 09:04:08.430559 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 09:04:08.463440 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 09:04:08.463482 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 09:04:08.488692 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 09:04:08.488820 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 09:04:08.488905 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 09:04:08.488997 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: VF slot 1 added Jul 2 09:04:08.489101 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 09:04:08.489188 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 09:04:08.489271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 09:04:08.489280 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 09:04:08.417135 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 09:04:08.472561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 09:04:08.523969 kernel: hv_vmbus: registering driver hv_pci Jul 2 09:04:08.523990 kernel: hv_pci bdc93c64-c0f7-439e-8025-446bcfe02f12: PCI VMBus probing: Using version 0x10004 Jul 2 09:04:08.612388 kernel: hv_pci bdc93c64-c0f7-439e-8025-446bcfe02f12: PCI host bridge to bus c0f7:00 Jul 2 09:04:08.612527 kernel: pci_bus c0f7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 2 09:04:08.612632 kernel: pci_bus c0f7:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 09:04:08.612709 kernel: pci c0f7:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 2 09:04:08.612813 kernel: pci c0f7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 09:04:08.612899 kernel: pci c0f7:00:02.0: enabling Extended Tags Jul 2 09:04:08.612983 kernel: pci c0f7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c0f7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 2 09:04:08.613066 kernel: pci_bus c0f7:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 09:04:08.613143 kernel: pci c0f7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 09:04:08.489624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 09:04:08.548604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 09:04:08.659750 kernel: mlx5_core c0f7:00:02.0: enabling device (0000 -> 0002) Jul 2 09:04:08.890744 kernel: mlx5_core c0f7:00:02.0: firmware version: 16.30.1284 Jul 2 09:04:08.890892 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: VF registering: eth1 Jul 2 09:04:08.890983 kernel: mlx5_core c0f7:00:02.0 eth1: joined to eth0 Jul 2 09:04:08.891138 kernel: mlx5_core c0f7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 2 09:04:08.899391 kernel: mlx5_core c0f7:00:02.0 enP49399s1: renamed from eth1 Jul 2 09:04:09.131558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 2 09:04:09.432388 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (487) Jul 2 09:04:09.447889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 09:04:09.483832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 09:04:09.558549 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (488) Jul 2 09:04:09.572568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 09:04:09.580336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 2 09:04:09.612625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 09:04:09.640402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 09:04:09.648387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 09:04:10.656460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 09:04:10.657319 disk-uuid[599]: The operation has completed successfully. Jul 2 09:04:10.720604 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 09:04:10.722791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 2 09:04:10.762560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 09:04:10.776319 sh[685]: Success Jul 2 09:04:10.806402 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 09:04:10.990118 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 09:04:11.013022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 09:04:11.024095 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 09:04:11.052645 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1 Jul 2 09:04:11.052711 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 09:04:11.052732 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 09:04:11.064488 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 09:04:11.069046 kernel: BTRFS info (device dm-0): using free space tree Jul 2 09:04:11.349830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 09:04:11.355533 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 09:04:11.374645 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 09:04:11.382519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 09:04:11.415334 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 09:04:11.415399 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 09:04:11.419978 kernel: BTRFS info (device sda6): using free space tree Jul 2 09:04:11.454415 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 09:04:11.464082 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 09:04:11.476289 kernel: BTRFS info (device sda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 09:04:11.484944 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 09:04:11.499588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 09:04:11.546415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 09:04:11.565517 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 09:04:11.592769 systemd-networkd[869]: lo: Link UP Jul 2 09:04:11.592783 systemd-networkd[869]: lo: Gained carrier Jul 2 09:04:11.594348 systemd-networkd[869]: Enumeration completed Jul 2 09:04:11.596527 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 09:04:11.597019 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:04:11.597023 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 09:04:11.603513 systemd[1]: Reached target network.target - Network. Jul 2 09:04:11.683373 kernel: mlx5_core c0f7:00:02.0 enP49399s1: Link up Jul 2 09:04:11.729387 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: Data path switched to VF: enP49399s1 Jul 2 09:04:11.729676 systemd-networkd[869]: enP49399s1: Link UP Jul 2 09:04:11.729780 systemd-networkd[869]: eth0: Link UP Jul 2 09:04:11.729915 systemd-networkd[869]: eth0: Gained carrier Jul 2 09:04:11.729925 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 09:04:11.741403 systemd-networkd[869]: enP49399s1: Gained carrier Jul 2 09:04:11.767405 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 09:04:12.320936 ignition[821]: Ignition 2.18.0 Jul 2 09:04:12.320953 ignition[821]: Stage: fetch-offline Jul 2 09:04:12.320990 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:12.328255 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 09:04:12.320998 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:12.321087 ignition[821]: parsed url from cmdline: "" Jul 2 09:04:12.321090 ignition[821]: no config URL provided Jul 2 09:04:12.321094 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 09:04:12.321101 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jul 2 09:04:12.356679 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 09:04:12.321106 ignition[821]: failed to fetch config: resource requires networking Jul 2 09:04:12.321290 ignition[821]: Ignition finished successfully Jul 2 09:04:12.375773 ignition[879]: Ignition 2.18.0 Jul 2 09:04:12.375785 ignition[879]: Stage: fetch Jul 2 09:04:12.376057 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:12.376070 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:12.376208 ignition[879]: parsed url from cmdline: "" Jul 2 09:04:12.376212 ignition[879]: no config URL provided Jul 2 09:04:12.376218 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 09:04:12.376229 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jul 2 09:04:12.376254 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 09:04:12.477623 ignition[879]: GET result: OK Jul 2 09:04:12.478292 ignition[879]: config has been read from IMDS userdata Jul 2 
09:04:12.478348 ignition[879]: parsing config with SHA512: 4bdca900d1a2aa8f8ad8fe2c606cf830a1d3dfb214eedce06248f45462d67b90d948527e7bf2735a360d5893e7559777b2174d43c9e5da1b6cd40e3c2d3bbdce Jul 2 09:04:12.482552 unknown[879]: fetched base config from "system" Jul 2 09:04:12.483009 ignition[879]: fetch: fetch complete Jul 2 09:04:12.482560 unknown[879]: fetched base config from "system" Jul 2 09:04:12.483014 ignition[879]: fetch: fetch passed Jul 2 09:04:12.482565 unknown[879]: fetched user config from "azure" Jul 2 09:04:12.483070 ignition[879]: Ignition finished successfully Jul 2 09:04:12.486978 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 09:04:12.510526 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 09:04:12.528853 ignition[886]: Ignition 2.18.0 Jul 2 09:04:12.528868 ignition[886]: Stage: kargs Jul 2 09:04:12.538570 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 09:04:12.529067 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:12.529077 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:12.556945 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 09:04:12.530140 ignition[886]: kargs: kargs passed Jul 2 09:04:12.530199 ignition[886]: Ignition finished successfully Jul 2 09:04:12.583175 ignition[894]: Ignition 2.18.0 Jul 2 09:04:12.583183 ignition[894]: Stage: disks Jul 2 09:04:12.588933 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 09:04:12.583653 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:12.597840 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 09:04:12.583667 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:12.610719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jul 2 09:04:12.584906 ignition[894]: disks: disks passed Jul 2 09:04:12.624115 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 09:04:12.584988 ignition[894]: Ignition finished successfully Jul 2 09:04:12.637505 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 09:04:12.650293 systemd[1]: Reached target basic.target - Basic System. Jul 2 09:04:12.682731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 09:04:12.751433 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 09:04:12.762456 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 09:04:12.779614 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 09:04:12.836382 kernel: EXT4-fs (sda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none. Jul 2 09:04:12.837015 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 09:04:12.847729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 09:04:12.893451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 09:04:12.904436 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 09:04:12.912582 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Jul 2 09:04:12.940151 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915) Jul 2 09:04:12.948525 systemd-networkd[869]: enP49399s1: Gained IPv6LL Jul 2 09:04:12.966923 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 09:04:12.966949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 09:04:12.966959 kernel: BTRFS info (device sda6): using free space tree Jul 2 09:04:12.959717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 09:04:12.959795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 09:04:12.987034 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 09:04:13.018180 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 09:04:13.006679 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 09:04:13.010904 systemd-networkd[869]: eth0: Gained IPv6LL Jul 2 09:04:13.025434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 09:04:13.627664 coreos-metadata[917]: Jul 02 09:04:13.627 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 09:04:13.638124 coreos-metadata[917]: Jul 02 09:04:13.638 INFO Fetch successful Jul 2 09:04:13.643644 coreos-metadata[917]: Jul 02 09:04:13.643 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 09:04:13.667863 coreos-metadata[917]: Jul 02 09:04:13.667 INFO Fetch successful Jul 2 09:04:13.673860 coreos-metadata[917]: Jul 02 09:04:13.669 INFO wrote hostname ci-3975.1.1-a-59f2e70dce to /sysroot/etc/hostname Jul 2 09:04:13.674540 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 2 09:04:14.317882 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 09:04:14.340933 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jul 2 09:04:14.365217 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 09:04:14.390003 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 09:04:15.210395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 09:04:15.226621 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 09:04:15.240873 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 09:04:15.254926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 09:04:15.270073 kernel: BTRFS info (device sda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 09:04:15.290127 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 09:04:15.308090 ignition[1034]: INFO : Ignition 2.18.0 Jul 2 09:04:15.312535 ignition[1034]: INFO : Stage: mount Jul 2 09:04:15.312535 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:15.312535 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:15.312535 ignition[1034]: INFO : mount: mount passed Jul 2 09:04:15.312535 ignition[1034]: INFO : Ignition finished successfully Jul 2 09:04:15.316766 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 09:04:15.346591 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 09:04:15.366085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 2 09:04:15.390377 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046) Jul 2 09:04:15.407371 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 09:04:15.407423 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 09:04:15.407434 kernel: BTRFS info (device sda6): using free space tree Jul 2 09:04:15.419374 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 09:04:15.419678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 09:04:15.447584 ignition[1063]: INFO : Ignition 2.18.0 Jul 2 09:04:15.447584 ignition[1063]: INFO : Stage: files Jul 2 09:04:15.456096 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:15.456096 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:15.456096 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Jul 2 09:04:15.475243 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 09:04:15.475243 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 09:04:15.523596 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 09:04:15.532286 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 09:04:15.532286 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 09:04:15.523997 unknown[1063]: wrote ssh authorized keys file for user: core Jul 2 09:04:15.558180 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 09:04:15.570336 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 09:04:15.871418 
ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 09:04:16.079087 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 09:04:16.520060 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 09:04:17.024767 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 09:04:17.024767 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: createResultFile: createFiles: 
op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 09:04:17.044667 ignition[1063]: INFO : files: files passed Jul 2 09:04:17.044667 ignition[1063]: INFO : Ignition finished successfully Jul 2 09:04:17.055345 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 09:04:17.088711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 09:04:17.097569 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 09:04:17.185451 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 09:04:17.185451 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 09:04:17.132363 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 09:04:17.215521 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 09:04:17.132468 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 09:04:17.141600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 09:04:17.152871 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 09:04:17.176639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 09:04:17.226644 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 09:04:17.228392 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 09:04:17.238848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 09:04:17.250889 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 09:04:17.262609 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Jul 2 09:04:17.275649 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 09:04:17.303650 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 09:04:17.335673 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 09:04:17.352703 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 09:04:17.352940 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 09:04:17.366475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 09:04:17.380778 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 09:04:17.394787 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 09:04:17.407986 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 09:04:17.408056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 09:04:17.426701 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 09:04:17.444765 systemd[1]: Stopped target basic.target - Basic System. Jul 2 09:04:17.455413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 09:04:17.467623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 09:04:17.480400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 09:04:17.492922 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 09:04:17.504743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 09:04:17.518234 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 09:04:17.531575 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 09:04:17.542489 systemd[1]: Stopped target swap.target - Swaps. Jul 2 09:04:17.552109 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jul 2 09:04:17.552191 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 09:04:17.569035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 09:04:17.575610 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:04:17.587810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 09:04:17.587856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:04:17.600681 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 09:04:17.600757 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 09:04:17.619976 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 09:04:17.620033 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 09:04:17.634248 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 09:04:17.634298 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 09:04:17.644926 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 09:04:17.644974 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 09:04:17.711690 ignition[1117]: INFO : Ignition 2.18.0 Jul 2 09:04:17.711690 ignition[1117]: INFO : Stage: umount Jul 2 09:04:17.711690 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:17.711690 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:17.711690 ignition[1117]: INFO : umount: umount passed Jul 2 09:04:17.711690 ignition[1117]: INFO : Ignition finished successfully Jul 2 09:04:17.678614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 09:04:17.706081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 09:04:17.720344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 2 09:04:17.720441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:04:17.732384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 09:04:17.732451 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 09:04:17.745730 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 09:04:17.745850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 09:04:17.755905 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 09:04:17.755966 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 09:04:17.767526 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 09:04:17.767595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 09:04:17.774805 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 09:04:17.774860 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 09:04:17.791851 systemd[1]: Stopped target network.target - Network. Jul 2 09:04:17.803805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 09:04:17.803887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 09:04:17.817600 systemd[1]: Stopped target paths.target - Path Units. Jul 2 09:04:17.829042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 09:04:17.839387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:04:17.847475 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 09:04:17.857577 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 09:04:17.870542 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 09:04:17.870595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 09:04:17.879484 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 2 09:04:17.879540 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 09:04:17.887392 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 09:04:17.887453 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 09:04:17.893634 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 09:04:17.893686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 09:04:17.905045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 09:04:17.917637 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 09:04:17.929856 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 09:04:17.934470 systemd-networkd[869]: eth0: DHCPv6 lease lost Jul 2 09:04:17.934730 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 09:04:17.934845 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 09:04:18.179896 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: Data path switched from VF: enP49399s1 Jul 2 09:04:17.944994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 09:04:17.945110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 09:04:17.955061 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 09:04:17.955195 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 09:04:17.970507 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 09:04:17.970557 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:04:18.003628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 09:04:18.013570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 09:04:18.013692 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 2 09:04:18.029814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:04:18.029889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:04:18.040874 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 09:04:18.040939 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 09:04:18.047271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:04:18.094057 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 09:04:18.094228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:04:18.107753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 09:04:18.107813 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 09:04:18.130090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 09:04:18.130148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:04:18.140980 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 09:04:18.141042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 09:04:18.166859 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 09:04:18.166922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 09:04:18.179702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:04:18.179763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 09:04:18.206542 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 09:04:18.224544 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 09:04:18.224621 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:04:18.453942 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jul 2 09:04:18.238243 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 09:04:18.238309 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 09:04:18.251493 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 09:04:18.251555 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:04:18.270504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:04:18.270567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:18.278979 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 09:04:18.279092 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 09:04:18.291716 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 09:04:18.293889 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 09:04:18.302080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 09:04:18.302165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 09:04:18.317529 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 09:04:18.328106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 09:04:18.328210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 09:04:18.355531 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 09:04:18.372299 systemd[1]: Switching root. 
Jul 2 09:04:18.557700 systemd-journald[217]: Journal stopped Jul 2
09:04:07.400527 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jul 2 09:04:07.400536 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jul 2 09:04:07.400543 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jul 2 09:04:07.400549 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jul 2 09:04:07.400556 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jul 2 09:04:07.400563 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jul 2 09:04:07.400569 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 2 09:04:07.400576 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 2 09:04:07.400583 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 2 09:04:07.400589 kernel: psci: probing for conduit method from ACPI. Jul 2 09:04:07.400596 kernel: psci: PSCIv1.1 detected in firmware. Jul 2 09:04:07.400603 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 09:04:07.400610 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 2 09:04:07.400618 kernel: psci: SMC Calling Convention v1.4 Jul 2 09:04:07.400625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 2 09:04:07.400632 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 2 09:04:07.400638 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 09:04:07.400645 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 09:04:07.400652 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 09:04:07.400659 kernel: Detected PIPT I-cache on CPU0 Jul 2 09:04:07.400665 kernel: CPU features: detected: GIC system register CPU interface Jul 2 09:04:07.400672 kernel: CPU features: detected: Hardware dirty bit management Jul 2 09:04:07.400679 kernel: CPU features: detected: Spectre-BHB Jul 2 09:04:07.400685 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 2 09:04:07.400692 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 2 09:04:07.400701 kernel: CPU features: detected: ARM erratum 1418040 Jul 2 09:04:07.400708 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 2 09:04:07.400714 kernel: alternatives: applying boot alternatives Jul 2 09:04:07.400723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 09:04:07.400730 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 09:04:07.400736 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 09:04:07.400743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 09:04:07.400750 kernel: Fallback order for Node 0: 0 Jul 2 09:04:07.400757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 2 09:04:07.400763 kernel: Policy zone: Normal Jul 2 09:04:07.400771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 09:04:07.400778 kernel: software IO TLB: area num 2. Jul 2 09:04:07.400785 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jul 2 09:04:07.400792 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jul 2 09:04:07.400799 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 09:04:07.400805 kernel: trace event string verifier disabled Jul 2 09:04:07.400812 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 09:04:07.400819 kernel: rcu: RCU event tracing is enabled. Jul 2 09:04:07.400826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 09:04:07.400833 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 09:04:07.400840 kernel: Tracing variant of Tasks RCU enabled. Jul 2 09:04:07.400846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 09:04:07.400855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 09:04:07.400862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 09:04:07.400868 kernel: GICv3: 960 SPIs implemented Jul 2 09:04:07.400875 kernel: GICv3: 0 Extended SPIs implemented Jul 2 09:04:07.400881 kernel: Root IRQ handler: gic_handle_irq Jul 2 09:04:07.400888 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 2 09:04:07.400895 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 2 09:04:07.400902 kernel: ITS: No ITS available, not enabling LPIs Jul 2 09:04:07.400909 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 09:04:07.400916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 09:04:07.400923 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 2 09:04:07.400931 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 2 09:04:07.400939 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 2 09:04:07.400946 kernel: Console: colour dummy device 80x25 Jul 2 09:04:07.400953 kernel: printk: console [tty1] enabled Jul 2 09:04:07.400960 kernel: ACPI: Core revision 20230628 Jul 2 09:04:07.400967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 2 09:04:07.400975 kernel: pid_max: default: 32768 minimum: 301 Jul 2 09:04:07.400981 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 09:04:07.400999 kernel: SELinux: Initializing. Jul 2 09:04:07.401007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 09:04:07.401016 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 09:04:07.401023 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
Jul 2 09:04:07.401030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 09:04:07.401037 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 2 09:04:07.401045 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jul 2 09:04:07.401051 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 2 09:04:07.401059 kernel: rcu: Hierarchical SRCU implementation. Jul 2 09:04:07.401072 kernel: rcu: Max phase no-delay instances is 400. Jul 2 09:04:07.401080 kernel: Remapping and enabling EFI services. Jul 2 09:04:07.401087 kernel: smp: Bringing up secondary CPUs ... Jul 2 09:04:07.401094 kernel: Detected PIPT I-cache on CPU1 Jul 2 09:04:07.401103 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 2 09:04:07.401111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 09:04:07.401118 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 2 09:04:07.401126 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 09:04:07.401133 kernel: SMP: Total of 2 processors activated. 
Jul 2 09:04:07.401142 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 09:04:07.401149 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 2 09:04:07.401157 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 2 09:04:07.401164 kernel: CPU features: detected: CRC32 instructions Jul 2 09:04:07.401172 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 2 09:04:07.401179 kernel: CPU features: detected: LSE atomic instructions Jul 2 09:04:07.401187 kernel: CPU features: detected: Privileged Access Never Jul 2 09:04:07.401194 kernel: CPU: All CPU(s) started at EL1 Jul 2 09:04:07.401201 kernel: alternatives: applying system-wide alternatives Jul 2 09:04:07.401210 kernel: devtmpfs: initialized Jul 2 09:04:07.401217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 09:04:07.401224 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 09:04:07.401232 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 09:04:07.401239 kernel: SMBIOS 3.1.0 present. 
Jul 2 09:04:07.401247 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jul 2 09:04:07.401254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 09:04:07.401262 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 09:04:07.401270 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 09:04:07.401278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 09:04:07.401286 kernel: audit: initializing netlink subsys (disabled) Jul 2 09:04:07.401293 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jul 2 09:04:07.401300 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 09:04:07.401308 kernel: cpuidle: using governor menu Jul 2 09:04:07.401315 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 09:04:07.401322 kernel: ASID allocator initialised with 32768 entries Jul 2 09:04:07.401329 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 09:04:07.401337 kernel: Serial: AMBA PL011 UART driver Jul 2 09:04:07.401346 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 2 09:04:07.401353 kernel: Modules: 0 pages in range for non-PLT usage Jul 2 09:04:07.401360 kernel: Modules: 509120 pages in range for PLT usage Jul 2 09:04:07.401368 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 09:04:07.401375 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 09:04:07.401382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 09:04:07.401389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 09:04:07.401397 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 09:04:07.401404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 09:04:07.401413 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 2 09:04:07.401420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 09:04:07.401428 kernel: ACPI: Added _OSI(Module Device) Jul 2 09:04:07.401435 kernel: ACPI: Added _OSI(Processor Device) Jul 2 09:04:07.401442 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 09:04:07.401449 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 09:04:07.401457 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 09:04:07.401464 kernel: ACPI: Interpreter enabled Jul 2 09:04:07.401471 kernel: ACPI: Using GIC for interrupt routing Jul 2 09:04:07.401480 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 2 09:04:07.401487 kernel: printk: console [ttyAMA0] enabled Jul 2 09:04:07.401495 kernel: printk: bootconsole [pl11] disabled Jul 2 09:04:07.401502 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 2 09:04:07.401509 kernel: iommu: Default domain type: Translated Jul 2 09:04:07.401516 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 09:04:07.401523 kernel: efivars: Registered efivars operations Jul 2 09:04:07.401530 kernel: vgaarb: loaded Jul 2 09:04:07.401537 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 09:04:07.401546 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 09:04:07.401554 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 09:04:07.401562 kernel: pnp: PnP ACPI init Jul 2 09:04:07.401569 kernel: pnp: PnP ACPI: found 0 devices Jul 2 09:04:07.401576 kernel: NET: Registered PF_INET protocol family Jul 2 09:04:07.401583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 09:04:07.401591 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 09:04:07.401598 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 09:04:07.401605 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 09:04:07.401614 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 09:04:07.401622 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 09:04:07.401629 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 09:04:07.401636 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 09:04:07.401643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 09:04:07.401651 kernel: PCI: CLS 0 bytes, default 64 Jul 2 09:04:07.401658 kernel: kvm [1]: HYP mode not available Jul 2 09:04:07.401665 kernel: Initialise system trusted keyrings Jul 2 09:04:07.401672 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 09:04:07.401682 kernel: Key type asymmetric registered Jul 2 09:04:07.401689 kernel: Asymmetric key parser 'x509' registered Jul 2 09:04:07.401696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 09:04:07.401703 kernel: io scheduler mq-deadline registered Jul 2 09:04:07.401711 kernel: io scheduler kyber registered Jul 2 09:04:07.401718 kernel: io scheduler bfq registered Jul 2 09:04:07.401725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 09:04:07.401732 kernel: thunder_xcv, ver 1.0 Jul 2 09:04:07.401739 kernel: thunder_bgx, ver 1.0 Jul 2 09:04:07.401747 kernel: nicpf, ver 1.0 Jul 2 09:04:07.401756 kernel: nicvf, ver 1.0 Jul 2 09:04:07.401897 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 09:04:07.401970 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:04:06 UTC (1719911046) Jul 2 09:04:07.401980 kernel: efifb: probing for efifb Jul 2 09:04:07.401999 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 09:04:07.402007 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 09:04:07.402014 kernel: efifb: scrolling: redraw Jul 2 09:04:07.402024 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jul 2 09:04:07.402032 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 09:04:07.402039 kernel: fb0: EFI VGA frame buffer device Jul 2 09:04:07.402046 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 2 09:04:07.402054 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 09:04:07.402061 kernel: No ACPI PMU IRQ for CPU0 Jul 2 09:04:07.402068 kernel: No ACPI PMU IRQ for CPU1 Jul 2 09:04:07.402075 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 2 09:04:07.402083 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 09:04:07.402092 kernel: watchdog: Hard watchdog permanently disabled Jul 2 09:04:07.402099 kernel: NET: Registered PF_INET6 protocol family Jul 2 09:04:07.402106 kernel: Segment Routing with IPv6 Jul 2 09:04:07.402114 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 09:04:07.402121 kernel: NET: Registered PF_PACKET protocol family Jul 2 09:04:07.402128 kernel: Key type dns_resolver registered Jul 2 09:04:07.402135 kernel: registered taskstats version 1 Jul 2 09:04:07.402142 kernel: Loading compiled-in X.509 certificates Jul 2 09:04:07.402150 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 09:04:07.402159 kernel: Key type .fscrypt registered Jul 2 09:04:07.402166 kernel: Key type fscrypt-provisioning registered Jul 2 09:04:07.402174 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 09:04:07.402181 kernel: ima: Allocated hash algorithm: sha1 Jul 2 09:04:07.402188 kernel: ima: No architecture policies found Jul 2 09:04:07.402196 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 09:04:07.402203 kernel: clk: Disabling unused clocks Jul 2 09:04:07.402210 kernel: Freeing unused kernel memory: 39040K Jul 2 09:04:07.402217 kernel: Run /init as init process Jul 2 09:04:07.402226 kernel: with arguments: Jul 2 09:04:07.402233 kernel: /init Jul 2 09:04:07.402240 kernel: with environment: Jul 2 09:04:07.402247 kernel: HOME=/ Jul 2 09:04:07.402254 kernel: TERM=linux Jul 2 09:04:07.402261 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 09:04:07.402270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 09:04:07.402279 systemd[1]: Detected virtualization microsoft. Jul 2 09:04:07.402289 systemd[1]: Detected architecture arm64. Jul 2 09:04:07.402297 systemd[1]: Running in initrd. Jul 2 09:04:07.402304 systemd[1]: No hostname configured, using default hostname. Jul 2 09:04:07.402312 systemd[1]: Hostname set to <localhost>. Jul 2 09:04:07.402320 systemd[1]: Initializing machine ID from random generator. Jul 2 09:04:07.402328 systemd[1]: Queued start job for default target initrd.target. Jul 2 09:04:07.402336 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:04:07.402343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:04:07.402354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 2 09:04:07.402362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 09:04:07.402370 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 09:04:07.402378 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 09:04:07.402387 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 09:04:07.402396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 09:04:07.402403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:04:07.402413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 09:04:07.402421 systemd[1]: Reached target paths.target - Path Units. Jul 2 09:04:07.402429 systemd[1]: Reached target slices.target - Slice Units. Jul 2 09:04:07.402437 systemd[1]: Reached target swap.target - Swaps. Jul 2 09:04:07.402444 systemd[1]: Reached target timers.target - Timer Units. Jul 2 09:04:07.402452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 09:04:07.402460 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 09:04:07.402468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 09:04:07.402478 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 09:04:07.402486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:04:07.402494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 09:04:07.402502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:04:07.402510 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 09:04:07.402518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 2 09:04:07.402526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 09:04:07.402534 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 09:04:07.402542 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 09:04:07.402552 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 09:04:07.402560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 09:04:07.402590 systemd-journald[217]: Collecting audit messages is disabled. Jul 2 09:04:07.402611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:04:07.402622 systemd-journald[217]: Journal started Jul 2 09:04:07.402641 systemd-journald[217]: Runtime Journal (/run/log/journal/e6aa7933201a40c2b9493ffdadfc546e) is 8.0M, max 78.6M, 70.6M free. Jul 2 09:04:07.412385 systemd-modules-load[218]: Inserted module 'overlay' Jul 2 09:04:07.441326 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 09:04:07.441407 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 09:04:07.451472 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 2 09:04:07.458394 kernel: Bridge firewalling registered Jul 2 09:04:07.453977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 09:04:07.475033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:04:07.482580 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 09:04:07.493640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 09:04:07.503831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:07.531327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 09:04:07.544191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 09:04:07.558244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:04:07.595168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:04:07.604022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:04:07.619840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:04:07.626562 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:04:07.644289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:04:07.680260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:04:07.695327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:04:07.708225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:04:07.719488 dracut-cmdline[250]: dracut-dracut-053
Jul 2 09:04:07.719488 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:04:07.770052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:04:07.786391 systemd-resolved[256]: Positive Trust Anchors:
Jul 2 09:04:07.786402 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:04:07.786433 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:04:07.788741 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 2 09:04:07.790241 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:04:07.798563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:04:07.928021 kernel: SCSI subsystem initialized
Jul 2 09:04:07.936017 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:04:07.947006 kernel: iscsi: registered transport (tcp)
Jul 2 09:04:07.966001 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:04:07.966052 kernel: QLogic iSCSI HBA Driver
Jul 2 09:04:08.007887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:04:08.022353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:04:08.056821 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:04:08.056919 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:04:08.064410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:04:08.116019 kernel: raid6: neonx8 gen() 15716 MB/s
Jul 2 09:04:08.136000 kernel: raid6: neonx4 gen() 15667 MB/s
Jul 2 09:04:08.155999 kernel: raid6: neonx2 gen() 13240 MB/s
Jul 2 09:04:08.176999 kernel: raid6: neonx1 gen() 10461 MB/s
Jul 2 09:04:08.196998 kernel: raid6: int64x8 gen() 6960 MB/s
Jul 2 09:04:08.217011 kernel: raid6: int64x4 gen() 7347 MB/s
Jul 2 09:04:08.238007 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 2 09:04:08.262718 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 2 09:04:08.262736 kernel: raid6: using algorithm neonx8 gen() 15716 MB/s
Jul 2 09:04:08.287248 kernel: raid6: .... xor() 12024 MB/s, rmw enabled
Jul 2 09:04:08.287260 kernel: raid6: using neon recovery algorithm
Jul 2 09:04:08.299179 kernel: xor: measuring software checksum speed
Jul 2 09:04:08.299197 kernel: 8regs : 19864 MB/sec
Jul 2 09:04:08.303207 kernel: 32regs : 19725 MB/sec
Jul 2 09:04:08.307060 kernel: arm64_neon : 27134 MB/sec
Jul 2 09:04:08.311140 kernel: xor: using function: arm64_neon (27134 MB/sec)
Jul 2 09:04:08.365018 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:04:08.374573 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:04:08.391285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:04:08.415184 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jul 2 09:04:08.420967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:04:08.441165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:04:08.465439 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Jul 2 09:04:08.498721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:04:08.518349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:04:08.561089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:04:08.583253 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:04:08.615370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:04:08.631878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:04:08.649069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:04:08.666315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:04:08.685024 kernel: hv_vmbus: Vmbus version:5.3
Jul 2 09:04:08.686172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:04:08.720539 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 09:04:08.720608 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 09:04:08.721007 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 09:04:08.726947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:04:08.727145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:04:08.762342 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 09:04:08.762398 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 09:04:08.762409 kernel: PTP clock support registered
Jul 2 09:04:08.762419 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 09:04:08.774555 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:04:08.818168 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 2 09:04:08.818203 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 2 09:04:08.818382 kernel: hv_vmbus: registering driver hv_storvsc
Jul 2 09:04:08.804106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:04:08.830998 kernel: scsi host0: storvsc_host_t
Jul 2 09:04:08.831202 kernel: scsi host1: storvsc_host_t
Jul 2 09:04:08.831226 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 09:04:08.804351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:04:08.870060 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 09:04:08.870085 kernel: hv_vmbus: registering driver hv_utils
Jul 2 09:04:08.870095 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 09:04:08.870105 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 09:04:08.870122 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 09:04:08.870294 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 09:04:08.827922 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:04:08.377390 systemd-resolved[256]: Clock change detected. Flushing caches.
Jul 2 09:04:08.403433 systemd-journald[217]: Time jumped backwards, rotating.
Jul 2 09:04:08.394843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:04:08.430559 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 09:04:08.463440 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 09:04:08.463482 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 09:04:08.488692 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 09:04:08.488820 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 09:04:08.488905 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 09:04:08.488997 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: VF slot 1 added
Jul 2 09:04:08.489101 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 09:04:08.489188 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 09:04:08.489271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 09:04:08.489280 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 09:04:08.417135 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:04:08.472561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:04:08.523969 kernel: hv_vmbus: registering driver hv_pci
Jul 2 09:04:08.523990 kernel: hv_pci bdc93c64-c0f7-439e-8025-446bcfe02f12: PCI VMBus probing: Using version 0x10004
Jul 2 09:04:08.612388 kernel: hv_pci bdc93c64-c0f7-439e-8025-446bcfe02f12: PCI host bridge to bus c0f7:00
Jul 2 09:04:08.612527 kernel: pci_bus c0f7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 2 09:04:08.612632 kernel: pci_bus c0f7:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 09:04:08.612709 kernel: pci c0f7:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 2 09:04:08.612813 kernel: pci c0f7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 09:04:08.612899 kernel: pci c0f7:00:02.0: enabling Extended Tags
Jul 2 09:04:08.612983 kernel: pci c0f7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c0f7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 2 09:04:08.613066 kernel: pci_bus c0f7:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 09:04:08.613143 kernel: pci c0f7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 09:04:08.489624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:04:08.548604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:04:08.659750 kernel: mlx5_core c0f7:00:02.0: enabling device (0000 -> 0002)
Jul 2 09:04:08.890744 kernel: mlx5_core c0f7:00:02.0: firmware version: 16.30.1284
Jul 2 09:04:08.890892 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: VF registering: eth1
Jul 2 09:04:08.890983 kernel: mlx5_core c0f7:00:02.0 eth1: joined to eth0
Jul 2 09:04:08.891138 kernel: mlx5_core c0f7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 2 09:04:08.899391 kernel: mlx5_core c0f7:00:02.0 enP49399s1: renamed from eth1
Jul 2 09:04:09.131558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 2 09:04:09.432388 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (487)
Jul 2 09:04:09.447889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 09:04:09.483832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 2 09:04:09.558549 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (488)
Jul 2 09:04:09.572568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 2 09:04:09.580336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 2 09:04:09.612625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 09:04:09.640402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 09:04:09.648387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 09:04:10.656460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 09:04:10.657319 disk-uuid[599]: The operation has completed successfully.
Jul 2 09:04:10.720604 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 09:04:10.722791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 09:04:10.762560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 09:04:10.776319 sh[685]: Success
Jul 2 09:04:10.806402 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 09:04:10.990118 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 09:04:11.013022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 09:04:11.024095 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 09:04:11.052645 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 09:04:11.052711 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:04:11.052732 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 09:04:11.064488 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 09:04:11.069046 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 09:04:11.349830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 09:04:11.355533 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 09:04:11.374645 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 09:04:11.382519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 09:04:11.415334 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:04:11.415399 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:04:11.419978 kernel: BTRFS info (device sda6): using free space tree
Jul 2 09:04:11.454415 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 09:04:11.464082 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:04:11.476289 kernel: BTRFS info (device sda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:04:11.484944 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:04:11.499588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:04:11.546415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:04:11.565517 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:04:11.592769 systemd-networkd[869]: lo: Link UP
Jul 2 09:04:11.592783 systemd-networkd[869]: lo: Gained carrier
Jul 2 09:04:11.594348 systemd-networkd[869]: Enumeration completed
Jul 2 09:04:11.596527 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:04:11.597019 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:04:11.597023 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:04:11.603513 systemd[1]: Reached target network.target - Network.
Jul 2 09:04:11.683373 kernel: mlx5_core c0f7:00:02.0 enP49399s1: Link up
Jul 2 09:04:11.729387 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: Data path switched to VF: enP49399s1
Jul 2 09:04:11.729676 systemd-networkd[869]: enP49399s1: Link UP
Jul 2 09:04:11.729780 systemd-networkd[869]: eth0: Link UP
Jul 2 09:04:11.729915 systemd-networkd[869]: eth0: Gained carrier
Jul 2 09:04:11.729925 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:04:11.741403 systemd-networkd[869]: enP49399s1: Gained carrier
Jul 2 09:04:11.767405 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 09:04:12.320936 ignition[821]: Ignition 2.18.0
Jul 2 09:04:12.320953 ignition[821]: Stage: fetch-offline
Jul 2 09:04:12.320990 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:12.328255 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:04:12.320998 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:12.321087 ignition[821]: parsed url from cmdline: ""
Jul 2 09:04:12.321090 ignition[821]: no config URL provided
Jul 2 09:04:12.321094 ignition[821]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:04:12.321101 ignition[821]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:04:12.356679 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 09:04:12.321106 ignition[821]: failed to fetch config: resource requires networking
Jul 2 09:04:12.321290 ignition[821]: Ignition finished successfully
Jul 2 09:04:12.375773 ignition[879]: Ignition 2.18.0
Jul 2 09:04:12.375785 ignition[879]: Stage: fetch
Jul 2 09:04:12.376057 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:12.376070 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:12.376208 ignition[879]: parsed url from cmdline: ""
Jul 2 09:04:12.376212 ignition[879]: no config URL provided
Jul 2 09:04:12.376218 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:04:12.376229 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:04:12.376254 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 2 09:04:12.477623 ignition[879]: GET result: OK
Jul 2 09:04:12.478292 ignition[879]: config has been read from IMDS userdata
Jul 2 09:04:12.478348 ignition[879]: parsing config with SHA512: 4bdca900d1a2aa8f8ad8fe2c606cf830a1d3dfb214eedce06248f45462d67b90d948527e7bf2735a360d5893e7559777b2174d43c9e5da1b6cd40e3c2d3bbdce
Jul 2 09:04:12.482552 unknown[879]: fetched base config from "system"
Jul 2 09:04:12.483009 ignition[879]: fetch: fetch complete
Jul 2 09:04:12.482560 unknown[879]: fetched base config from "system"
Jul 2 09:04:12.483014 ignition[879]: fetch: fetch passed
Jul 2 09:04:12.482565 unknown[879]: fetched user config from "azure"
Jul 2 09:04:12.483070 ignition[879]: Ignition finished successfully
Jul 2 09:04:12.486978 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 09:04:12.510526 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:04:12.528853 ignition[886]: Ignition 2.18.0
Jul 2 09:04:12.528868 ignition[886]: Stage: kargs
Jul 2 09:04:12.538570 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:04:12.529067 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:12.529077 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:12.556945 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:04:12.530140 ignition[886]: kargs: kargs passed
Jul 2 09:04:12.530199 ignition[886]: Ignition finished successfully
Jul 2 09:04:12.583175 ignition[894]: Ignition 2.18.0
Jul 2 09:04:12.583183 ignition[894]: Stage: disks
Jul 2 09:04:12.588933 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:04:12.583653 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:12.597840 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:04:12.583667 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:12.610719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:04:12.584906 ignition[894]: disks: disks passed
Jul 2 09:04:12.624115 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:04:12.584988 ignition[894]: Ignition finished successfully
Jul 2 09:04:12.637505 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:04:12.650293 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:04:12.682731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:04:12.751433 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 2 09:04:12.762456 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:04:12.779614 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:04:12.836382 kernel: EXT4-fs (sda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:04:12.837015 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:04:12.847729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:04:12.893451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:04:12.904436 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:04:12.912582 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 09:04:12.940151 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915)
Jul 2 09:04:12.948525 systemd-networkd[869]: enP49399s1: Gained IPv6LL
Jul 2 09:04:12.966923 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:04:12.966949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:04:12.966959 kernel: BTRFS info (device sda6): using free space tree
Jul 2 09:04:12.959717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:04:12.959795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:04:12.987034 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:04:13.018180 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 09:04:13.006679 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:04:13.010904 systemd-networkd[869]: eth0: Gained IPv6LL
Jul 2 09:04:13.025434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:04:13.627664 coreos-metadata[917]: Jul 02 09:04:13.627 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 09:04:13.638124 coreos-metadata[917]: Jul 02 09:04:13.638 INFO Fetch successful
Jul 2 09:04:13.643644 coreos-metadata[917]: Jul 02 09:04:13.643 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 09:04:13.667863 coreos-metadata[917]: Jul 02 09:04:13.667 INFO Fetch successful
Jul 2 09:04:13.673860 coreos-metadata[917]: Jul 02 09:04:13.669 INFO wrote hostname ci-3975.1.1-a-59f2e70dce to /sysroot/etc/hostname
Jul 2 09:04:13.674540 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 09:04:14.317882 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:04:14.340933 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:04:14.365217 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:04:14.390003 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:04:15.210395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:04:15.226621 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:04:15.240873 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:04:15.254926 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:04:15.270073 kernel: BTRFS info (device sda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:04:15.290127 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:04:15.308090 ignition[1034]: INFO : Ignition 2.18.0
Jul 2 09:04:15.312535 ignition[1034]: INFO : Stage: mount
Jul 2 09:04:15.312535 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:15.312535 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:15.312535 ignition[1034]: INFO : mount: mount passed
Jul 2 09:04:15.312535 ignition[1034]: INFO : Ignition finished successfully
Jul 2 09:04:15.316766 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:04:15.346591 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:04:15.366085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:04:15.390377 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046)
Jul 2 09:04:15.407371 kernel: BTRFS info (device sda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:04:15.407423 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:04:15.407434 kernel: BTRFS info (device sda6): using free space tree
Jul 2 09:04:15.419374 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 09:04:15.419678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:04:15.447584 ignition[1063]: INFO : Ignition 2.18.0
Jul 2 09:04:15.447584 ignition[1063]: INFO : Stage: files
Jul 2 09:04:15.456096 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:04:15.456096 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 09:04:15.456096 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:04:15.475243 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:04:15.475243 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:04:15.523596 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:04:15.532286 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:04:15.532286 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:04:15.523997 unknown[1063]: wrote ssh authorized keys file for user: core
Jul 2 09:04:15.558180 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:04:15.570336 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:04:15.871418 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 09:04:16.079087 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:04:16.091212 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 09:04:16.520060 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 09:04:17.024767 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:04:17.024767 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:04:17.044667 ignition[1063]: INFO : files: files passed
Jul 2 09:04:17.044667 ignition[1063]: INFO : Ignition finished successfully
Jul 2 09:04:17.055345 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:04:17.088711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:04:17.097569 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:04:17.185451 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:04:17.185451 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:04:17.132363 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:04:17.215521 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:04:17.132468 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:04:17.141600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:04:17.152871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:04:17.176639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:04:17.226644 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:04:17.228392 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:04:17.238848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:04:17.250889 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:04:17.262609 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:04:17.275649 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:04:17.303650 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:04:17.335673 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:04:17.352703 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 09:04:17.352940 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 09:04:17.366475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:04:17.380778 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:04:17.394787 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:04:17.407986 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:04:17.408056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:04:17.426701 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 09:04:17.444765 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 09:04:17.455413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 09:04:17.467623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:04:17.480400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 09:04:17.492922 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 09:04:17.504743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:04:17.518234 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 09:04:17.531575 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 09:04:17.542489 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 09:04:17.552109 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 09:04:17.552191 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 09:04:17.569035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 09:04:17.575610 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:04:17.587810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 09:04:17.587856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:04:17.600681 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 09:04:17.600757 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 09:04:17.619976 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 09:04:17.620033 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 09:04:17.634248 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 09:04:17.634298 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 09:04:17.644926 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 09:04:17.644974 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 09:04:17.711690 ignition[1117]: INFO : Ignition 2.18.0 Jul 2 09:04:17.711690 ignition[1117]: INFO : Stage: umount Jul 2 09:04:17.711690 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:04:17.711690 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 09:04:17.711690 ignition[1117]: INFO : umount: umount passed Jul 2 09:04:17.711690 ignition[1117]: INFO : Ignition finished successfully Jul 2 09:04:17.678614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 09:04:17.706081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 09:04:17.720344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 2 09:04:17.720441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:04:17.732384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 09:04:17.732451 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 09:04:17.745730 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 09:04:17.745850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 09:04:17.755905 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 09:04:17.755966 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 09:04:17.767526 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 09:04:17.767595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 09:04:17.774805 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 09:04:17.774860 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 09:04:17.791851 systemd[1]: Stopped target network.target - Network. Jul 2 09:04:17.803805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 09:04:17.803887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 09:04:17.817600 systemd[1]: Stopped target paths.target - Path Units. Jul 2 09:04:17.829042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 09:04:17.839387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:04:17.847475 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 09:04:17.857577 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 09:04:17.870542 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 09:04:17.870595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 09:04:17.879484 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 2 09:04:17.879540 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 09:04:17.887392 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 09:04:17.887453 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 09:04:17.893634 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 09:04:17.893686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 09:04:17.905045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 09:04:17.917637 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 09:04:17.929856 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 09:04:17.934470 systemd-networkd[869]: eth0: DHCPv6 lease lost Jul 2 09:04:17.934730 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 09:04:17.934845 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 09:04:18.179896 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: Data path switched from VF: enP49399s1 Jul 2 09:04:17.944994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 09:04:17.945110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 09:04:17.955061 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 09:04:17.955195 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 09:04:17.970507 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 09:04:17.970557 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:04:18.003628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 09:04:18.013570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 09:04:18.013692 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 2 09:04:18.029814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:04:18.029889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:04:18.040874 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 09:04:18.040939 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 09:04:18.047271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:04:18.094057 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 09:04:18.094228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:04:18.107753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 09:04:18.107813 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 09:04:18.130090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 09:04:18.130148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:04:18.140980 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 09:04:18.141042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 09:04:18.166859 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 09:04:18.166922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 09:04:18.179702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:04:18.179763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 09:04:18.206542 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 09:04:18.224544 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 09:04:18.224621 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:04:18.453942 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jul 2 09:04:18.238243 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 09:04:18.238309 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 09:04:18.251493 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 09:04:18.251555 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:04:18.270504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:04:18.270567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:18.278979 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 09:04:18.279092 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 09:04:18.291716 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 09:04:18.293889 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 09:04:18.302080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 09:04:18.302165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 09:04:18.317529 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 09:04:18.328106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 09:04:18.328210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 09:04:18.355531 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 09:04:18.372299 systemd[1]: Switching root. 
Jul 2 09:04:18.557700 systemd-journald[217]: Journal stopped Jul 2 09:04:23.295872 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 09:04:23.295903 kernel: SELinux: policy capability open_perms=1 Jul 2 09:04:23.295915 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 09:04:23.295927 kernel: SELinux: policy capability always_check_network=0 Jul 2 09:04:23.295937 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 09:04:23.295947 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 09:04:23.295957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 09:04:23.295967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 09:04:23.295979 kernel: audit: type=1403 audit(1719911059.709:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 09:04:23.295991 systemd[1]: Successfully loaded SELinux policy in 177.159ms. Jul 2 09:04:23.296006 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.677ms. Jul 2 09:04:23.296017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 09:04:23.296030 systemd[1]: Detected virtualization microsoft. Jul 2 09:04:23.296041 systemd[1]: Detected architecture arm64. Jul 2 09:04:23.296052 systemd[1]: Detected first boot. Jul 2 09:04:23.296065 systemd[1]: Hostname set to . Jul 2 09:04:23.296075 systemd[1]: Initializing machine ID from random generator. Jul 2 09:04:23.296088 zram_generator::config[1157]: No configuration found. Jul 2 09:04:23.296097 systemd[1]: Populated /etc with preset unit settings. Jul 2 09:04:23.296108 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jul 2 09:04:23.296121 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 09:04:23.296133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 09:04:23.296146 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 09:04:23.296157 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 09:04:23.296170 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 09:04:23.296179 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 09:04:23.296192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 09:04:23.296202 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 09:04:23.296214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 09:04:23.296227 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 09:04:23.296237 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:04:23.296251 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:04:23.296261 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 09:04:23.296274 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 09:04:23.296284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 09:04:23.296294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 09:04:23.296303 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 09:04:23.296318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 2 09:04:23.296331 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 09:04:23.296340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 09:04:23.296367 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 09:04:23.296380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 09:04:23.296390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 09:04:23.296404 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 09:04:23.296416 systemd[1]: Reached target slices.target - Slice Units. Jul 2 09:04:23.296426 systemd[1]: Reached target swap.target - Swaps. Jul 2 09:04:23.296438 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 09:04:23.296449 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 09:04:23.296460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:04:23.296471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 09:04:23.296488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:04:23.296499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 09:04:23.296511 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 09:04:23.296521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 09:04:23.296534 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 09:04:23.296543 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 09:04:23.296554 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 09:04:23.296569 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 2 09:04:23.296581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 09:04:23.296590 systemd[1]: Reached target machines.target - Containers. Jul 2 09:04:23.296604 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 09:04:23.296617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:04:23.296627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 09:04:23.296637 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 09:04:23.296652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:04:23.296663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 09:04:23.296676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:04:23.296687 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 09:04:23.296700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:04:23.296710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 09:04:23.296720 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 09:04:23.296734 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 09:04:23.296745 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 09:04:23.296755 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 09:04:23.296770 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 09:04:23.296783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 09:04:23.296793 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 09:04:23.296802 kernel: fuse: init (API version 7.39) Jul 2 09:04:23.296812 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 09:04:23.296825 kernel: loop: module loaded Jul 2 09:04:23.296833 kernel: ACPI: bus type drm_connector registered Jul 2 09:04:23.296843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 09:04:23.296857 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 09:04:23.296889 systemd-journald[1252]: Collecting audit messages is disabled. Jul 2 09:04:23.296911 systemd[1]: Stopped verity-setup.service. Jul 2 09:04:23.296923 systemd-journald[1252]: Journal started Jul 2 09:04:23.296947 systemd-journald[1252]: Runtime Journal (/run/log/journal/135b00c90edc4691a9a749e9aeccf10f) is 8.0M, max 78.6M, 70.6M free. Jul 2 09:04:22.101905 systemd[1]: Queued start job for default target multi-user.target. Jul 2 09:04:22.252045 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 2 09:04:22.252419 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 09:04:22.252729 systemd[1]: systemd-journald.service: Consumed 3.300s CPU time. Jul 2 09:04:23.313942 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 09:04:23.314874 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 09:04:23.321256 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 09:04:23.327748 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 09:04:23.333301 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 09:04:23.339746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 09:04:23.346194 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 2 09:04:23.351917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 09:04:23.360077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:04:23.368476 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 09:04:23.368621 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 09:04:23.375452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:04:23.375589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:04:23.382485 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:04:23.382622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 09:04:23.389530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:04:23.391402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:04:23.398911 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 09:04:23.399053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 09:04:23.405319 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:04:23.405475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:04:23.411806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 09:04:23.420316 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 09:04:23.427954 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 09:04:23.435374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:04:23.451089 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 09:04:23.465481 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jul 2 09:04:23.473417 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 09:04:23.479821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 09:04:23.479865 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 09:04:23.486670 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 09:04:23.495443 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 09:04:23.503172 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 09:04:23.510001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:04:23.534505 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 09:04:23.541708 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 09:04:23.548182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:04:23.549412 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 09:04:23.555744 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 09:04:23.558604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 09:04:23.568619 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 09:04:23.582157 systemd-journald[1252]: Time spent on flushing to /var/log/journal/135b00c90edc4691a9a749e9aeccf10f is 1.195667s for 895 entries. Jul 2 09:04:23.582157 systemd-journald[1252]: System Journal (/var/log/journal/135b00c90edc4691a9a749e9aeccf10f) is 11.8M, max 2.6G, 2.6G free. 
Jul 2 09:04:25.420335 systemd-journald[1252]: Received client request to flush runtime journal. Jul 2 09:04:25.420482 kernel: loop0: detected capacity change from 0 to 59672 Jul 2 09:04:25.420507 kernel: block loop0: the capability attribute has been deprecated. Jul 2 09:04:25.420712 systemd-journald[1252]: /var/log/journal/135b00c90edc4691a9a749e9aeccf10f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 2 09:04:25.420738 systemd-journald[1252]: Rotating system journal. Jul 2 09:04:23.589535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 09:04:23.603337 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 09:04:23.612855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 09:04:23.619950 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 09:04:23.627459 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 09:04:23.638767 udevadm[1293]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 09:04:23.763448 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 09:04:23.772150 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 09:04:23.788564 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 09:04:23.828450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:04:24.282814 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Jul 2 09:04:24.282825 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Jul 2 09:04:24.288415 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 2 09:04:24.309625 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 09:04:24.837947 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 09:04:24.849568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 09:04:24.869260 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Jul 2 09:04:24.869271 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Jul 2 09:04:24.872918 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:04:25.214265 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 09:04:25.215021 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 09:04:25.421892 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 09:04:25.546388 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 09:04:25.581392 kernel: loop1: detected capacity change from 0 to 56592 Jul 2 09:04:26.147533 kernel: loop2: detected capacity change from 0 to 194512 Jul 2 09:04:26.663032 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 09:04:26.677555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:04:26.690384 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 09:04:26.708181 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Jul 2 09:04:26.843404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:04:26.859268 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 09:04:26.900166 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 2 09:04:26.955487 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1334) Jul 2 09:04:26.960105 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 09:04:27.001895 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 09:04:27.145386 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 09:04:27.225410 kernel: hv_vmbus: registering driver hv_balloon Jul 2 09:04:27.235865 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 09:04:27.235977 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 2 09:04:27.243753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:04:27.252834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:04:27.253011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:27.275608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:04:27.348219 systemd-networkd[1324]: lo: Link UP Jul 2 09:04:27.348235 systemd-networkd[1324]: lo: Gained carrier Jul 2 09:04:27.350247 systemd-networkd[1324]: Enumeration completed Jul 2 09:04:27.350382 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 09:04:27.351770 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:04:27.351776 systemd-networkd[1324]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 09:04:27.362587 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 2 09:04:27.415391 kernel: mlx5_core c0f7:00:02.0 enP49399s1: Link up Jul 2 09:04:27.442376 kernel: hv_netvsc 002248b8-8d48-0022-48b8-8d48002248b8 eth0: Data path switched to VF: enP49399s1 Jul 2 09:04:27.443008 systemd-networkd[1324]: enP49399s1: Link UP Jul 2 09:04:27.443104 systemd-networkd[1324]: eth0: Link UP Jul 2 09:04:27.443107 systemd-networkd[1324]: eth0: Gained carrier Jul 2 09:04:27.443122 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:04:27.447691 systemd-networkd[1324]: enP49399s1: Gained carrier Jul 2 09:04:27.455413 systemd-networkd[1324]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 09:04:27.482445 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 09:04:27.482550 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 09:04:27.489481 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 09:04:27.494611 kernel: Console: switching to colour dummy device 80x25 Jul 2 09:04:27.496386 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 09:04:27.505268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:04:27.505608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:27.524699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:04:27.552446 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1323) Jul 2 09:04:27.590153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 09:04:27.602503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 09:04:27.632499 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 2 09:04:27.653407 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 09:04:27.664408 kernel: loop5: detected capacity change from 0 to 56592 Jul 2 09:04:27.672392 kernel: loop6: detected capacity change from 0 to 194512 Jul 2 09:04:27.682392 kernel: loop7: detected capacity change from 0 to 113672 Jul 2 09:04:27.685294 (sd-merge)[1412]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 2 09:04:27.685746 (sd-merge)[1412]: Merged extensions into '/usr'. Jul 2 09:04:27.698748 systemd[1]: Reloading requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 09:04:27.698769 systemd[1]: Reloading... Jul 2 09:04:27.776422 zram_generator::config[1444]: No configuration found. Jul 2 09:04:27.916793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:04:27.990911 systemd[1]: Reloading finished in 291 ms. Jul 2 09:04:28.020036 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 09:04:28.028019 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 09:04:28.038867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:04:28.055631 systemd[1]: Starting ensure-sysext.service... Jul 2 09:04:28.064364 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 09:04:28.073465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 09:04:28.083673 systemd[1]: Reloading requested from client PID 1504 ('systemctl') (unit ensure-sysext.service)... Jul 2 09:04:28.083690 systemd[1]: Reloading... Jul 2 09:04:28.118262 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 2 09:04:28.120291 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 09:04:28.121194 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 09:04:28.121542 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Jul 2 09:04:28.121671 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Jul 2 09:04:28.126203 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:04:28.131418 systemd-tmpfiles[1506]: Skipping /boot
Jul 2 09:04:28.135407 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:04:28.148903 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:04:28.149049 systemd-tmpfiles[1506]: Skipping /boot
Jul 2 09:04:28.186382 zram_generator::config[1544]: No configuration found.
Jul 2 09:04:28.293982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:04:28.366732 systemd[1]: Reloading finished in 282 ms.
Jul 2 09:04:28.386904 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 09:04:28.396064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:04:28.409473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:04:28.423640 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:04:28.432744 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 09:04:28.442082 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 09:04:28.457229 lvm[1597]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:04:28.460746 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 09:04:28.474234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:04:28.481951 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 09:04:28.501995 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 09:04:28.518273 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 09:04:28.533442 systemd[1]: Finished ensure-sysext.service.
Jul 2 09:04:28.541270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:04:28.549952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:04:28.557079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:04:28.566269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:04:28.578605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:04:28.584529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:04:28.584825 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 09:04:28.591181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 09:04:28.599996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:04:28.600152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:04:28.606926 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:04:28.607069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:04:28.614723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:04:28.614866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:04:28.623953 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:04:28.624111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:04:28.634648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:04:28.634747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:04:28.651812 systemd-resolved[1604]: Positive Trust Anchors:
Jul 2 09:04:28.651831 systemd-resolved[1604]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:04:28.651860 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:04:28.671180 augenrules[1622]: No rules
Jul 2 09:04:28.674284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:04:28.674677 systemd-resolved[1604]: Using system hostname 'ci-3975.1.1-a-59f2e70dce'.
Jul 2 09:04:28.681120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:04:28.687892 systemd[1]: Reached target network.target - Network.
Jul 2 09:04:28.694198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:04:29.007418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 09:04:29.015864 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 09:04:29.392561 systemd-networkd[1324]: enP49399s1: Gained IPv6LL
Jul 2 09:04:29.456475 systemd-networkd[1324]: eth0: Gained IPv6LL
Jul 2 09:04:29.460331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 09:04:29.468617 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 09:04:32.270281 ldconfig[1285]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 09:04:32.306396 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 09:04:32.319579 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 09:04:32.334737 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 09:04:32.341135 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:04:32.347039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 09:04:32.354304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 09:04:32.362006 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 09:04:32.367952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 09:04:32.375256 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 09:04:32.382139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 09:04:32.382187 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:04:32.387453 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:04:32.393471 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 09:04:32.401151 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 09:04:32.414109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 09:04:32.422118 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 09:04:32.428177 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:04:32.433728 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:04:32.439084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:04:32.439122 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:04:32.448504 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 2 09:04:32.457507 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 09:04:32.469580 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 09:04:32.476371 (chronyd)[1640]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 2 09:04:32.481012 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 09:04:32.488491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 09:04:32.499620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 09:04:32.502572 jq[1646]: false
Jul 2 09:04:32.505687 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 09:04:32.506838 chronyd[1649]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 2 09:04:32.508448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:04:32.518593 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 09:04:32.526573 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 09:04:32.534735 chronyd[1649]: Timezone right/UTC failed leap second check, ignoring
Jul 2 09:04:32.534970 chronyd[1649]: Loaded seccomp filter (level 2)
Jul 2 09:04:32.539944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 09:04:32.547073 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 09:04:32.560056 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 09:04:32.566927 extend-filesystems[1647]: Found loop4
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found loop5
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found loop6
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found loop7
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda1
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda2
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda3
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found usr
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda4
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda6
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda7
Jul 2 09:04:32.573394 extend-filesystems[1647]: Found sda9
Jul 2 09:04:32.573394 extend-filesystems[1647]: Checking size of /dev/sda9
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.747 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.755 INFO Fetch successful
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.755 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.764 INFO Fetch successful
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.764 INFO Fetching http://168.63.129.16/machine/4263bc72-faec-43ee-93bc-f79e26416463/53ada769%2D09a4%2D4ac9%2D87da%2D5b0dec543562.%5Fci%2D3975.1.1%2Da%2D59f2e70dce?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.767 INFO Fetch successful
Jul 2 09:04:32.772271 coreos-metadata[1642]: Jul 02 09:04:32.769 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 2 09:04:32.585570 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 09:04:32.789004 extend-filesystems[1647]: Old size kept for /dev/sda9
Jul 2 09:04:32.789004 extend-filesystems[1647]: Found sr0
Jul 2 09:04:32.635761 dbus-daemon[1643]: [system] SELinux support is enabled
Jul 2 09:04:32.832770 coreos-metadata[1642]: Jul 02 09:04:32.783 INFO Fetch successful
Jul 2 09:04:32.597140 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 09:04:32.597670 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 09:04:32.606576 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 09:04:32.833133 update_engine[1668]: I0702 09:04:32.715935 1668 main.cc:92] Flatcar Update Engine starting
Jul 2 09:04:32.833133 update_engine[1668]: I0702 09:04:32.721138 1668 update_check_scheduler.cc:74] Next update check in 5m57s
Jul 2 09:04:32.615769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 09:04:32.834499 jq[1671]: true
Jul 2 09:04:32.631590 systemd[1]: Started chronyd.service - NTP client/server.
Jul 2 09:04:32.650891 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 09:04:32.671963 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 09:04:32.672203 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 09:04:32.672662 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 09:04:32.672832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 09:04:32.722468 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 09:04:32.722681 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 09:04:32.741883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 09:04:32.794103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 09:04:32.794263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 09:04:32.847384 jq[1696]: true
Jul 2 09:04:32.848656 (ntainerd)[1697]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 09:04:32.859817 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 09:04:32.872511 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 09:04:32.882508 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 09:04:32.882622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 09:04:32.882651 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 09:04:32.890190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 09:04:32.890218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 09:04:32.902607 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 09:04:32.914392 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1690)
Jul 2 09:04:33.103008 tar[1688]: linux-arm64/helm
Jul 2 09:04:33.106791 systemd-logind[1665]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 2 09:04:33.107474 systemd-logind[1665]: New seat seat0.
Jul 2 09:04:33.110451 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 09:04:33.330891 sshd_keygen[1670]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 09:04:33.356336 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 09:04:33.370857 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 09:04:33.382328 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 2 09:04:33.397776 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 09:04:33.398324 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 09:04:33.416712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 09:04:33.454646 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 2 09:04:33.574655 locksmithd[1728]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 09:04:33.640270 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 09:04:33.657587 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 09:04:33.671579 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 09:04:33.679102 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 09:04:33.755241 bash[1737]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 09:04:33.757466 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 09:04:33.770216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 09:04:33.844520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:04:33.862725 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:04:33.964159 tar[1688]: linux-arm64/LICENSE
Jul 2 09:04:33.964159 tar[1688]: linux-arm64/README.md
Jul 2 09:04:33.976244 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 09:04:34.081718 containerd[1697]: time="2024-07-02T09:04:34.081605120Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 09:04:34.124240 containerd[1697]: time="2024-07-02T09:04:34.123784840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 09:04:34.124240 containerd[1697]: time="2024-07-02T09:04:34.123840200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.126967040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127013480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127268000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127286120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127372360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127427000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127438840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127769 containerd[1697]: time="2024-07-02T09:04:34.127496560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127998 containerd[1697]: time="2024-07-02T09:04:34.127895320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.127998 containerd[1697]: time="2024-07-02T09:04:34.127927440Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 09:04:34.127998 containerd[1697]: time="2024-07-02T09:04:34.127942920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:04:34.128190 containerd[1697]: time="2024-07-02T09:04:34.128163160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:04:34.128224 containerd[1697]: time="2024-07-02T09:04:34.128187400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 09:04:34.128285 containerd[1697]: time="2024-07-02T09:04:34.128259600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 09:04:34.128285 containerd[1697]: time="2024-07-02T09:04:34.128282560Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 09:04:34.295006 kubelet[1796]: E0702 09:04:34.294913 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:04:34.298429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:04:34.298588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:04:34.318038 containerd[1697]: time="2024-07-02T09:04:34.317979120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 09:04:34.318038 containerd[1697]: time="2024-07-02T09:04:34.318043280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 09:04:34.318347 containerd[1697]: time="2024-07-02T09:04:34.318059240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 09:04:34.318347 containerd[1697]: time="2024-07-02T09:04:34.318097720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 09:04:34.318347 containerd[1697]: time="2024-07-02T09:04:34.318117120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 09:04:34.318347 containerd[1697]: time="2024-07-02T09:04:34.318129200Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 09:04:34.318347 containerd[1697]: time="2024-07-02T09:04:34.318145480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318376040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318397440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318411480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318425760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318442600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318468 containerd[1697]: time="2024-07-02T09:04:34.318462080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318476680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318490640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318505960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318519920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318534240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.318572 containerd[1697]: time="2024-07-02T09:04:34.318550400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 09:04:34.318675 containerd[1697]: time="2024-07-02T09:04:34.318662280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 09:04:34.318978 containerd[1697]: time="2024-07-02T09:04:34.318953200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 09:04:34.319025 containerd[1697]: time="2024-07-02T09:04:34.318989760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319025 containerd[1697]: time="2024-07-02T09:04:34.319004800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 09:04:34.319066 containerd[1697]: time="2024-07-02T09:04:34.319031480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319684520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319723400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319740160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319753680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319770080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319785440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.319808 containerd[1697]: time="2024-07-02T09:04:34.319799880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320025 containerd[1697]: time="2024-07-02T09:04:34.319814720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320025 containerd[1697]: time="2024-07-02T09:04:34.319831760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 09:04:34.320063 containerd[1697]: time="2024-07-02T09:04:34.320032160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320063 containerd[1697]: time="2024-07-02T09:04:34.320051360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320100 containerd[1697]: time="2024-07-02T09:04:34.320065400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320100 containerd[1697]: time="2024-07-02T09:04:34.320081280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320100 containerd[1697]: time="2024-07-02T09:04:34.320095240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320153 containerd[1697]: time="2024-07-02T09:04:34.320111440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320153 containerd[1697]: time="2024-07-02T09:04:34.320124440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320153 containerd[1697]: time="2024-07-02T09:04:34.320138120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 09:04:34.320551 containerd[1697]: time="2024-07-02T09:04:34.320477760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 09:04:34.320710 containerd[1697]: time="2024-07-02T09:04:34.320555400Z" level=info msg="Connect containerd service"
Jul 2 09:04:34.320710 containerd[1697]: time="2024-07-02T09:04:34.320602200Z" level=info msg="using legacy CRI server"
Jul 2 09:04:34.320710 containerd[1697]: time="2024-07-02T09:04:34.320610200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 09:04:34.320710 containerd[1697]: time="2024-07-02T09:04:34.320704960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 09:04:34.321442 containerd[1697]: time="2024-07-02T09:04:34.321409120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 09:04:34.321526 containerd[1697]: time="2024-07-02T09:04:34.321470920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 09:04:34.321526 containerd[1697]: time="2024-07-02T09:04:34.321491040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 09:04:34.321526 containerd[1697]: time="2024-07-02T09:04:34.321505880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 09:04:34.321526 containerd[1697]: time="2024-07-02T09:04:34.321519880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321589200Z" level=info msg="Start subscribing containerd event"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321653240Z" level=info msg="Start recovering state"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321748200Z" level=info msg="Start event monitor"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321763800Z" level=info msg="Start snapshots syncer"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321775120Z" level=info msg="Start cni network conf syncer for default"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321785880Z" level=info msg="Start streaming server"
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321804320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321839520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 09:04:34.328627 containerd[1697]: time="2024-07-02T09:04:34.321898720Z" level=info msg="containerd successfully booted in 0.246924s"
Jul 2 09:04:34.322135 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 09:04:34.330524 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 09:04:34.337756 systemd[1]: Startup finished in 719ms (kernel) + 13.260s (initrd) + 14.804s (userspace) = 28.784s.
Jul 2 09:04:34.604160 login[1784]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 09:04:34.605690 login[1785]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 09:04:34.618202 systemd-logind[1665]: New session 2 of user core.
Jul 2 09:04:34.619265 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 09:04:34.627741 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 09:04:34.633196 systemd-logind[1665]: New session 1 of user core. Jul 2 09:04:34.639997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 09:04:34.649811 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 09:04:34.653789 (systemd)[1817]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:04:34.899142 systemd[1817]: Queued start job for default target default.target. Jul 2 09:04:34.912312 systemd[1817]: Created slice app.slice - User Application Slice. Jul 2 09:04:34.912348 systemd[1817]: Reached target paths.target - Paths. Jul 2 09:04:34.912384 systemd[1817]: Reached target timers.target - Timers. Jul 2 09:04:34.917519 systemd[1817]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 09:04:34.929458 systemd[1817]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 09:04:34.930284 systemd[1817]: Reached target sockets.target - Sockets. Jul 2 09:04:34.930303 systemd[1817]: Reached target basic.target - Basic System. Jul 2 09:04:34.930399 systemd[1817]: Reached target default.target - Main User Target. Jul 2 09:04:34.930434 systemd[1817]: Startup finished in 269ms. Jul 2 09:04:34.931496 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 09:04:34.937533 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 09:04:34.938279 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 2 09:04:35.309382 waagent[1777]: 2024-07-02T09:04:35.305723Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 2 09:04:35.311963 waagent[1777]: 2024-07-02T09:04:35.311880Z INFO Daemon Daemon OS: flatcar 3975.1.1 Jul 2 09:04:35.316914 waagent[1777]: 2024-07-02T09:04:35.316847Z INFO Daemon Daemon Python: 3.11.9 Jul 2 09:04:35.324031 waagent[1777]: 2024-07-02T09:04:35.321901Z INFO Daemon Daemon Run daemon Jul 2 09:04:35.326578 waagent[1777]: 2024-07-02T09:04:35.326523Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.1.1' Jul 2 09:04:35.335601 waagent[1777]: 2024-07-02T09:04:35.335528Z INFO Daemon Daemon Using waagent for provisioning Jul 2 09:04:35.341017 waagent[1777]: 2024-07-02T09:04:35.340962Z INFO Daemon Daemon Activate resource disk Jul 2 09:04:35.346218 waagent[1777]: 2024-07-02T09:04:35.346157Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 09:04:35.358264 waagent[1777]: 2024-07-02T09:04:35.358194Z INFO Daemon Daemon Found device: None Jul 2 09:04:35.363286 waagent[1777]: 2024-07-02T09:04:35.363224Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 09:04:35.371789 waagent[1777]: 2024-07-02T09:04:35.371729Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 09:04:35.385014 waagent[1777]: 2024-07-02T09:04:35.384945Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 09:04:35.391025 waagent[1777]: 2024-07-02T09:04:35.390964Z INFO Daemon Daemon Running default provisioning handler Jul 2 09:04:35.403759 waagent[1777]: 2024-07-02T09:04:35.403152Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 2 09:04:35.418771 waagent[1777]: 2024-07-02T09:04:35.418698Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 09:04:35.429095 waagent[1777]: 2024-07-02T09:04:35.429029Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 09:04:35.434318 waagent[1777]: 2024-07-02T09:04:35.434257Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 09:04:35.851308 waagent[1777]: 2024-07-02T09:04:35.848013Z INFO Daemon Daemon Successfully mounted dvd Jul 2 09:04:35.877533 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 09:04:35.879988 waagent[1777]: 2024-07-02T09:04:35.879525Z INFO Daemon Daemon Detect protocol endpoint Jul 2 09:04:35.884595 waagent[1777]: 2024-07-02T09:04:35.884524Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 09:04:35.890314 waagent[1777]: 2024-07-02T09:04:35.890253Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 2 09:04:35.897793 waagent[1777]: 2024-07-02T09:04:35.897724Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 09:04:35.903160 waagent[1777]: 2024-07-02T09:04:35.903103Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 09:04:35.908251 waagent[1777]: 2024-07-02T09:04:35.908195Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 09:04:35.923329 waagent[1777]: 2024-07-02T09:04:35.923282Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 09:04:35.930113 waagent[1777]: 2024-07-02T09:04:35.930082Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 09:04:35.935391 waagent[1777]: 2024-07-02T09:04:35.935329Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 09:04:36.304083 waagent[1777]: 2024-07-02T09:04:36.303961Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 09:04:36.311076 waagent[1777]: 2024-07-02T09:04:36.311001Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 2 09:04:36.322303 waagent[1777]: 2024-07-02T09:04:36.322244Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 09:04:36.347110 waagent[1777]: 2024-07-02T09:04:36.347054Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jul 2 09:04:36.353362 waagent[1777]: 2024-07-02T09:04:36.353309Z INFO Daemon Jul 2 09:04:36.356688 waagent[1777]: 2024-07-02T09:04:36.356632Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1ac1267a-9921-443b-ada1-e082cd10b507 eTag: 12584605701013885103 source: Fabric] Jul 2 09:04:36.368572 waagent[1777]: 2024-07-02T09:04:36.368517Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 2 09:04:36.375546 waagent[1777]: 2024-07-02T09:04:36.375498Z INFO Daemon Jul 2 09:04:36.378405 waagent[1777]: 2024-07-02T09:04:36.378341Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 2 09:04:36.389439 waagent[1777]: 2024-07-02T09:04:36.389396Z INFO Daemon Daemon Downloading artifacts profile blob Jul 2 09:04:36.484814 waagent[1777]: 2024-07-02T09:04:36.484716Z INFO Daemon Downloaded certificate {'thumbprint': 'AC4C3FB9DE5A1235ECCAD0CCEFF1DD5FC42E55A4', 'hasPrivateKey': True} Jul 2 09:04:36.494849 waagent[1777]: 2024-07-02T09:04:36.494794Z INFO Daemon Downloaded certificate {'thumbprint': 'A136EF5158D8A20F1D869C3EC2590227ED8A13F7', 'hasPrivateKey': False} Jul 2 09:04:36.504839 waagent[1777]: 2024-07-02T09:04:36.504788Z INFO Daemon Fetch goal state completed Jul 2 09:04:36.516157 waagent[1777]: 2024-07-02T09:04:36.516107Z INFO Daemon Daemon Starting provisioning Jul 2 09:04:36.521122 waagent[1777]: 2024-07-02T09:04:36.521057Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 2 09:04:36.525800 waagent[1777]: 2024-07-02T09:04:36.525748Z INFO Daemon Daemon Set hostname [ci-3975.1.1-a-59f2e70dce] Jul 2 09:04:36.561382 waagent[1777]: 2024-07-02T09:04:36.560807Z INFO Daemon Daemon Publish hostname [ci-3975.1.1-a-59f2e70dce] Jul 2 09:04:36.567412 waagent[1777]: 2024-07-02T09:04:36.567321Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 09:04:36.573791 waagent[1777]: 2024-07-02T09:04:36.573730Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 09:04:36.601190 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:04:36.601199 systemd-networkd[1324]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 09:04:36.601248 systemd-networkd[1324]: eth0: DHCP lease lost Jul 2 09:04:36.602708 waagent[1777]: 2024-07-02T09:04:36.602616Z INFO Daemon Daemon Create user account if not exists Jul 2 09:04:36.608510 waagent[1777]: 2024-07-02T09:04:36.608444Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 09:04:36.614210 waagent[1777]: 2024-07-02T09:04:36.614146Z INFO Daemon Daemon Configure sudoer Jul 2 09:04:36.618882 waagent[1777]: 2024-07-02T09:04:36.618816Z INFO Daemon Daemon Configure sshd Jul 2 09:04:36.623495 waagent[1777]: 2024-07-02T09:04:36.623432Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 2 09:04:36.636432 waagent[1777]: 2024-07-02T09:04:36.636344Z INFO Daemon Daemon Deploy ssh public key. 
Jul 2 09:04:36.640948 systemd-networkd[1324]: eth0: DHCPv6 lease lost Jul 2 09:04:36.668420 systemd-networkd[1324]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 09:04:37.944393 waagent[1777]: 2024-07-02T09:04:37.939531Z INFO Daemon Daemon Provisioning complete Jul 2 09:04:37.961097 waagent[1777]: 2024-07-02T09:04:37.961006Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 09:04:37.968702 waagent[1777]: 2024-07-02T09:04:37.968599Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 2 09:04:37.982152 waagent[1777]: 2024-07-02T09:04:37.982088Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 2 09:04:38.125346 waagent[1865]: 2024-07-02T09:04:38.125265Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 2 09:04:38.126201 waagent[1865]: 2024-07-02T09:04:38.125790Z INFO ExtHandler ExtHandler OS: flatcar 3975.1.1 Jul 2 09:04:38.126201 waagent[1865]: 2024-07-02T09:04:38.125865Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 2 09:04:39.175320 waagent[1865]: 2024-07-02T09:04:39.175187Z INFO ExtHandler ExtHandler Distro: flatcar-3975.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 09:04:39.175659 waagent[1865]: 2024-07-02T09:04:39.175513Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 09:04:39.175659 waagent[1865]: 2024-07-02T09:04:39.175587Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 09:04:39.190031 waagent[1865]: 2024-07-02T09:04:39.189927Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 09:04:39.200027 waagent[1865]: 2024-07-02T09:04:39.199980Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 09:04:39.200635 waagent[1865]: 2024-07-02T09:04:39.200590Z INFO ExtHandler 
Jul 2 09:04:39.200713 waagent[1865]: 2024-07-02T09:04:39.200683Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7d37759c-96ec-484b-bf5b-161e316c8da7 eTag: 12584605701013885103 source: Fabric] Jul 2 09:04:39.201019 waagent[1865]: 2024-07-02T09:04:39.200978Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 2 09:04:39.201649 waagent[1865]: 2024-07-02T09:04:39.201603Z INFO ExtHandler Jul 2 09:04:39.201722 waagent[1865]: 2024-07-02T09:04:39.201692Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 09:04:39.205917 waagent[1865]: 2024-07-02T09:04:39.205875Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 09:04:39.291566 waagent[1865]: 2024-07-02T09:04:39.291458Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC4C3FB9DE5A1235ECCAD0CCEFF1DD5FC42E55A4', 'hasPrivateKey': True} Jul 2 09:04:39.292046 waagent[1865]: 2024-07-02T09:04:39.291997Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A136EF5158D8A20F1D869C3EC2590227ED8A13F7', 'hasPrivateKey': False} Jul 2 09:04:39.292573 waagent[1865]: 2024-07-02T09:04:39.292522Z INFO ExtHandler Fetch goal state completed Jul 2 09:04:39.310260 waagent[1865]: 2024-07-02T09:04:39.310185Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1865 Jul 2 09:04:39.310459 waagent[1865]: 2024-07-02T09:04:39.310415Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 2 09:04:39.312194 waagent[1865]: 2024-07-02T09:04:39.312136Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 09:04:39.312634 waagent[1865]: 2024-07-02T09:04:39.312590Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 09:04:39.337900 waagent[1865]: 2024-07-02T09:04:39.337851Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up 
waagent-network-setup.service Jul 2 09:04:39.341116 waagent[1865]: 2024-07-02T09:04:39.341040Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 09:04:39.348099 waagent[1865]: 2024-07-02T09:04:39.348038Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 09:04:39.355435 systemd[1]: Reloading requested from client PID 1884 ('systemctl') (unit waagent.service)... Jul 2 09:04:39.355450 systemd[1]: Reloading... Jul 2 09:04:39.436431 zram_generator::config[1913]: No configuration found. Jul 2 09:04:39.550848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:04:39.625958 systemd[1]: Reloading finished in 270 ms. Jul 2 09:04:39.648135 waagent[1865]: 2024-07-02T09:04:39.647735Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 2 09:04:39.654778 systemd[1]: Reloading requested from client PID 1969 ('systemctl') (unit waagent.service)... Jul 2 09:04:39.654817 systemd[1]: Reloading... Jul 2 09:04:39.734584 zram_generator::config[1998]: No configuration found. Jul 2 09:04:39.843843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:04:39.917958 systemd[1]: Reloading finished in 262 ms. 
Jul 2 09:04:39.945832 waagent[1865]: 2024-07-02T09:04:39.944814Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 2 09:04:39.945832 waagent[1865]: 2024-07-02T09:04:39.945011Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 2 09:04:40.338288 waagent[1865]: 2024-07-02T09:04:40.337931Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 09:04:40.339067 waagent[1865]: 2024-07-02T09:04:40.338646Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 09:04:40.339602 waagent[1865]: 2024-07-02T09:04:40.339487Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 09:04:40.340115 waagent[1865]: 2024-07-02T09:04:40.339938Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 2 09:04:40.340465 waagent[1865]: 2024-07-02T09:04:40.340339Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 09:04:40.341311 waagent[1865]: 2024-07-02T09:04:40.340535Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 09:04:40.341311 waagent[1865]: 2024-07-02T09:04:40.340634Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 09:04:40.341311 waagent[1865]: 2024-07-02T09:04:40.340786Z INFO EnvHandler ExtHandler Configure routes Jul 2 09:04:40.341311 waagent[1865]: 2024-07-02T09:04:40.340847Z INFO EnvHandler ExtHandler Gateway:None Jul 2 09:04:40.341311 waagent[1865]: 2024-07-02T09:04:40.340896Z INFO EnvHandler ExtHandler Routes:None Jul 2 09:04:40.341709 waagent[1865]: 2024-07-02T09:04:40.341645Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 09:04:40.341971 waagent[1865]: 2024-07-02T09:04:40.341731Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 09:04:40.342555 waagent[1865]: 2024-07-02T09:04:40.342484Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 09:04:40.342606 waagent[1865]: 2024-07-02T09:04:40.342568Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 09:04:40.342796 waagent[1865]: 2024-07-02T09:04:40.342703Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 09:04:40.343239 waagent[1865]: 2024-07-02T09:04:40.343146Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 2 09:04:40.344112 waagent[1865]: 2024-07-02T09:04:40.343761Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 09:04:40.344494 waagent[1865]: 2024-07-02T09:04:40.344422Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 09:04:40.344494 waagent[1865]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 09:04:40.344494 waagent[1865]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 09:04:40.344494 waagent[1865]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 09:04:40.344494 waagent[1865]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 09:04:40.344494 waagent[1865]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 09:04:40.344494 waagent[1865]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 09:04:40.355030 waagent[1865]: 2024-07-02T09:04:40.353943Z INFO ExtHandler ExtHandler Jul 2 09:04:40.355030 waagent[1865]: 2024-07-02T09:04:40.354081Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f686f25b-58a9-43e9-9750-dc8920bb3500 correlation ae65895e-cce5-4513-9024-5865eb0c784f created: 2024-07-02T09:03:22.943817Z] Jul 2 09:04:40.355030 waagent[1865]: 2024-07-02T09:04:40.354595Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 2 09:04:40.357584 waagent[1865]: 2024-07-02T09:04:40.357519Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jul 2 09:04:40.389088 waagent[1865]: 2024-07-02T09:04:40.388985Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 09:04:40.389088 waagent[1865]: Executing ['ip', '-a', '-o', 'link']: Jul 2 09:04:40.389088 waagent[1865]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 09:04:40.389088 waagent[1865]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:8d:48 brd ff:ff:ff:ff:ff:ff Jul 2 09:04:40.389088 waagent[1865]: 3: enP49399s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:8d:48 brd ff:ff:ff:ff:ff:ff\ altname enP49399p0s2 Jul 2 09:04:40.389088 waagent[1865]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 09:04:40.389088 waagent[1865]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 09:04:40.389088 waagent[1865]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 09:04:40.389088 waagent[1865]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 09:04:40.389088 waagent[1865]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 2 09:04:40.389088 waagent[1865]: 2: eth0 inet6 fe80::222:48ff:feb8:8d48/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 2 09:04:40.389088 waagent[1865]: 3: enP49399s1 inet6 fe80::222:48ff:feb8:8d48/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 2 09:04:40.411938 waagent[1865]: 2024-07-02T09:04:40.411850Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
CFDD0827-88D4-446D-872C-50A31414716A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 2 09:04:40.472868 waagent[1865]: 2024-07-02T09:04:40.472768Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 2 09:04:40.472868 waagent[1865]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.472868 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.472868 waagent[1865]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.472868 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.472868 waagent[1865]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.472868 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.472868 waagent[1865]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 09:04:40.472868 waagent[1865]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 09:04:40.472868 waagent[1865]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 09:04:40.476467 waagent[1865]: 2024-07-02T09:04:40.476377Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 09:04:40.476467 waagent[1865]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.476467 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.476467 waagent[1865]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.476467 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.476467 waagent[1865]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 09:04:40.476467 waagent[1865]: pkts bytes target prot opt in out source destination Jul 2 09:04:40.476467 waagent[1865]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 09:04:40.476467 waagent[1865]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 09:04:40.476467 waagent[1865]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate 
INVALID,NEW Jul 2 09:04:40.476750 waagent[1865]: 2024-07-02T09:04:40.476709Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 09:04:44.490832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 09:04:44.498568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:04:44.594060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:04:44.613682 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:04:44.660969 kubelet[2093]: E0702 09:04:44.660882 2093 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:04:44.664950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:04:44.665079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:04:54.741020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 09:04:54.749574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:04:55.041340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:04:55.054676 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:04:55.099503 kubelet[2109]: E0702 09:04:55.099448 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:04:55.102638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:04:55.102868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:04:56.321412 chronyd[1649]: Selected source PHC0 Jul 2 09:04:59.630652 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 09:04:59.640668 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:37400.service - OpenSSH per-connection server daemon (10.200.16.10:37400). Jul 2 09:05:00.156433 sshd[2119]: Accepted publickey for core from 10.200.16.10 port 37400 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:00.157895 sshd[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:00.162420 systemd-logind[1665]: New session 3 of user core. Jul 2 09:05:00.171551 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 09:05:00.562500 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:37404.service - OpenSSH per-connection server daemon (10.200.16.10:37404). Jul 2 09:05:00.971680 sshd[2124]: Accepted publickey for core from 10.200.16.10 port 37404 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:00.973827 sshd[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:00.979152 systemd-logind[1665]: New session 4 of user core. 
Jul 2 09:05:00.986589 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 09:05:01.273118 sshd[2124]: pam_unix(sshd:session): session closed for user core Jul 2 09:05:01.277552 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:37404.service: Deactivated successfully. Jul 2 09:05:01.279183 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 09:05:01.280668 systemd-logind[1665]: Session 4 logged out. Waiting for processes to exit. Jul 2 09:05:01.281844 systemd-logind[1665]: Removed session 4. Jul 2 09:05:01.350569 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:37406.service - OpenSSH per-connection server daemon (10.200.16.10:37406). Jul 2 09:05:01.763741 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 37406 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:01.765208 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:01.770422 systemd-logind[1665]: New session 5 of user core. Jul 2 09:05:01.776563 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 09:05:02.064059 sshd[2131]: pam_unix(sshd:session): session closed for user core Jul 2 09:05:02.068691 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:37406.service: Deactivated successfully. Jul 2 09:05:02.070434 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 09:05:02.071287 systemd-logind[1665]: Session 5 logged out. Waiting for processes to exit. Jul 2 09:05:02.072286 systemd-logind[1665]: Removed session 5. Jul 2 09:05:02.148658 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:37410.service - OpenSSH per-connection server daemon (10.200.16.10:37410). 
Jul 2 09:05:02.557707 sshd[2138]: Accepted publickey for core from 10.200.16.10 port 37410 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:02.559133 sshd[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:02.563024 systemd-logind[1665]: New session 6 of user core. Jul 2 09:05:02.572564 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 09:05:02.862457 sshd[2138]: pam_unix(sshd:session): session closed for user core Jul 2 09:05:02.865827 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 09:05:02.866649 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:37410.service: Deactivated successfully. Jul 2 09:05:02.870101 systemd-logind[1665]: Session 6 logged out. Waiting for processes to exit. Jul 2 09:05:02.871050 systemd-logind[1665]: Removed session 6. Jul 2 09:05:02.943440 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:37414.service - OpenSSH per-connection server daemon (10.200.16.10:37414). Jul 2 09:05:03.394880 sshd[2145]: Accepted publickey for core from 10.200.16.10 port 37414 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:03.396293 sshd[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:03.401600 systemd-logind[1665]: New session 7 of user core. Jul 2 09:05:03.407566 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 09:05:03.774170 sudo[2148]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 09:05:03.774439 sudo[2148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:05:03.808155 sudo[2148]: pam_unix(sudo:session): session closed for user root Jul 2 09:05:03.890343 sshd[2145]: pam_unix(sshd:session): session closed for user core Jul 2 09:05:03.894673 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:37414.service: Deactivated successfully. Jul 2 09:05:03.896418 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 2 09:05:03.897210 systemd-logind[1665]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:05:03.899068 systemd-logind[1665]: Removed session 7. Jul 2 09:05:03.968236 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:37424.service - OpenSSH per-connection server daemon (10.200.16.10:37424). Jul 2 09:05:04.377742 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 37424 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:04.379284 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:04.383411 systemd-logind[1665]: New session 8 of user core. Jul 2 09:05:04.393532 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 09:05:04.614501 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 09:05:04.614756 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:05:04.618386 sudo[2157]: pam_unix(sudo:session): session closed for user root Jul 2 09:05:04.623578 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 09:05:04.623823 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:05:04.640889 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 09:05:04.642594 auditctl[2160]: No rules Jul 2 09:05:04.642944 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 09:05:04.643135 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 09:05:04.646001 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 09:05:04.672333 augenrules[2178]: No rules Jul 2 09:05:04.674443 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 2 09:05:04.676100 sudo[2156]: pam_unix(sudo:session): session closed for user root Jul 2 09:05:04.742593 sshd[2153]: pam_unix(sshd:session): session closed for user core Jul 2 09:05:04.745513 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:37424.service: Deactivated successfully. Jul 2 09:05:04.747318 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 09:05:04.748816 systemd-logind[1665]: Session 8 logged out. Waiting for processes to exit. Jul 2 09:05:04.750169 systemd-logind[1665]: Removed session 8. Jul 2 09:05:04.817964 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:37426.service - OpenSSH per-connection server daemon (10.200.16.10:37426). Jul 2 09:05:05.154510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 09:05:05.161569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:05.229072 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 37426 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o Jul 2 09:05:05.231065 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:05:05.237256 systemd-logind[1665]: New session 9 of user core. Jul 2 09:05:05.244547 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 09:05:05.320306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:05:05.334707 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:05:05.383220 kubelet[2197]: E0702 09:05:05.383156 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:05:05.385627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:05:05.385756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:05:05.466309 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 09:05:05.467038 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:05:06.110701 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 09:05:06.110845 (dockerd)[2214]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 09:05:08.452389 dockerd[2214]: time="2024-07-02T09:05:08.452030421Z" level=info msg="Starting up" Jul 2 09:05:10.072615 dockerd[2214]: time="2024-07-02T09:05:10.072541321Z" level=info msg="Loading containers: start." Jul 2 09:05:10.334518 kernel: Initializing XFRM netlink socket Jul 2 09:05:10.474045 systemd-networkd[1324]: docker0: Link UP Jul 2 09:05:10.576852 dockerd[2214]: time="2024-07-02T09:05:10.576811979Z" level=info msg="Loading containers: done." 
Jul 2 09:05:11.309169 dockerd[2214]: time="2024-07-02T09:05:11.309103389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 09:05:11.309576 dockerd[2214]: time="2024-07-02T09:05:11.309328149Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 09:05:11.309576 dockerd[2214]: time="2024-07-02T09:05:11.309482389Z" level=info msg="Daemon has completed initialization" Jul 2 09:05:11.486081 dockerd[2214]: time="2024-07-02T09:05:11.485500101Z" level=info msg="API listen on /run/docker.sock" Jul 2 09:05:11.485731 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 09:05:13.295859 containerd[1697]: time="2024-07-02T09:05:13.295745742Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 09:05:15.334515 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 2 09:05:15.383320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432870571.mount: Deactivated successfully. Jul 2 09:05:15.490799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 09:05:15.501590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:17.859553 update_engine[1668]: I0702 09:05:17.561246 1668 update_attempter.cc:509] Updating boot flags... Jul 2 09:05:18.208446 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2363) Jul 2 09:05:18.461960 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2366) Jul 2 09:05:18.543393 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2366) Jul 2 09:05:18.688640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:05:18.693787 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:05:18.744257 kubelet[2450]: E0702 09:05:18.744201 2450 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:05:18.747341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:05:18.747610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:05:22.355389 containerd[1697]: time="2024-07-02T09:05:22.355257504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:22.357503 containerd[1697]: time="2024-07-02T09:05:22.357464629Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256347" Jul 2 09:05:22.361166 containerd[1697]: time="2024-07-02T09:05:22.361116958Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:22.365404 containerd[1697]: time="2024-07-02T09:05:22.365272208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:22.366533 containerd[1697]: time="2024-07-02T09:05:22.366325491Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 9.070538269s" Jul 2 09:05:22.366533 containerd[1697]: time="2024-07-02T09:05:22.366394011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 09:05:22.390699 containerd[1697]: time="2024-07-02T09:05:22.390623589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 09:05:24.556156 containerd[1697]: time="2024-07-02T09:05:24.556098971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:24.561869 containerd[1697]: time="2024-07-02T09:05:24.561828265Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228084" Jul 2 09:05:24.567192 containerd[1697]: time="2024-07-02T09:05:24.567138557Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:24.573932 containerd[1697]: time="2024-07-02T09:05:24.573856854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:24.575161 containerd[1697]: time="2024-07-02T09:05:24.575022456Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 
2.184352867s" Jul 2 09:05:24.575161 containerd[1697]: time="2024-07-02T09:05:24.575068977Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 09:05:24.596651 containerd[1697]: time="2024-07-02T09:05:24.596603069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 09:05:25.592400 containerd[1697]: time="2024-07-02T09:05:25.592142509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:25.594186 containerd[1697]: time="2024-07-02T09:05:25.594056474Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578348" Jul 2 09:05:25.597040 containerd[1697]: time="2024-07-02T09:05:25.596984681Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:25.601856 containerd[1697]: time="2024-07-02T09:05:25.601794972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:25.603101 containerd[1697]: time="2024-07-02T09:05:25.602982455Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.006333906s" Jul 2 09:05:25.603101 containerd[1697]: time="2024-07-02T09:05:25.603019775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference 
\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 09:05:25.624161 containerd[1697]: time="2024-07-02T09:05:25.624113986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 09:05:26.982471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164941317.mount: Deactivated successfully. Jul 2 09:05:27.596784 containerd[1697]: time="2024-07-02T09:05:27.596735035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:27.600178 containerd[1697]: time="2024-07-02T09:05:27.600130001Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052710" Jul 2 09:05:27.603565 containerd[1697]: time="2024-07-02T09:05:27.603508487Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:27.607378 containerd[1697]: time="2024-07-02T09:05:27.607292974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:27.608044 containerd[1697]: time="2024-07-02T09:05:27.607911615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.983755669s" Jul 2 09:05:27.608044 containerd[1697]: time="2024-07-02T09:05:27.607949815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 09:05:27.628929 
containerd[1697]: time="2024-07-02T09:05:27.628856933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 09:05:28.322416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185251010.mount: Deactivated successfully. Jul 2 09:05:28.991087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 09:05:29.000976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:29.126387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:29.136754 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:05:29.196146 kubelet[2593]: E0702 09:05:29.196000 2593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:05:29.198871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:05:29.199033 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 09:05:31.266499 containerd[1697]: time="2024-07-02T09:05:31.266429708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:31.269704 containerd[1697]: time="2024-07-02T09:05:31.269562593Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 09:05:31.278610 containerd[1697]: time="2024-07-02T09:05:31.278523690Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:31.283311 containerd[1697]: time="2024-07-02T09:05:31.283235619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:31.284646 containerd[1697]: time="2024-07-02T09:05:31.284473622Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.655576129s" Jul 2 09:05:31.284646 containerd[1697]: time="2024-07-02T09:05:31.284531182Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 09:05:31.306794 containerd[1697]: time="2024-07-02T09:05:31.306734904Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 09:05:32.727305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102278331.mount: Deactivated successfully. 
Jul 2 09:05:32.956165 containerd[1697]: time="2024-07-02T09:05:32.955342913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:32.957405 containerd[1697]: time="2024-07-02T09:05:32.957344557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 09:05:33.019129 containerd[1697]: time="2024-07-02T09:05:33.019064434Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:33.064926 containerd[1697]: time="2024-07-02T09:05:33.064858961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:33.066209 containerd[1697]: time="2024-07-02T09:05:33.065617443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.758835499s" Jul 2 09:05:33.066209 containerd[1697]: time="2024-07-02T09:05:33.065657603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 09:05:33.086898 containerd[1697]: time="2024-07-02T09:05:33.086834483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 09:05:34.924111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925307463.mount: Deactivated successfully. Jul 2 09:05:39.240845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jul 2 09:05:39.252567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:39.342548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:39.352737 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:05:39.395846 kubelet[2635]: E0702 09:05:39.395751 2635 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:05:39.398576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:05:39.398705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:05:47.226306 containerd[1697]: time="2024-07-02T09:05:47.226239603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:47.229199 containerd[1697]: time="2024-07-02T09:05:47.229153734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 2 09:05:47.233750 containerd[1697]: time="2024-07-02T09:05:47.233693910Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:47.238783 containerd[1697]: time="2024-07-02T09:05:47.238719249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:05:47.239983 containerd[1697]: time="2024-07-02T09:05:47.239798053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" 
with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 14.15288433s" Jul 2 09:05:47.239983 containerd[1697]: time="2024-07-02T09:05:47.239840293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 09:05:49.490936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 2 09:05:49.501006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:49.680511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:49.691768 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:05:49.740130 kubelet[2742]: E0702 09:05:49.740080 2742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:05:49.744041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:05:49.744166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:05:53.037256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:53.047691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:53.071951 systemd[1]: Reloading requested from client PID 2756 ('systemctl') (unit session-9.scope)... Jul 2 09:05:53.072113 systemd[1]: Reloading... Jul 2 09:05:53.194389 zram_generator::config[2793]: No configuration found. 
Jul 2 09:05:53.312495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:05:53.389193 systemd[1]: Reloading finished in 316 ms. Jul 2 09:05:53.446426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:53.450894 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:53.453080 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:05:53.453299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:53.459690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:05:53.566042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:05:53.576923 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:05:53.624780 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:05:53.625150 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:05:53.625193 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 09:05:53.625342 kubelet[2862]: I0702 09:05:53.625297 2862 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:05:54.285137 kubelet[2862]: I0702 09:05:54.285093 2862 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 09:05:54.285332 kubelet[2862]: I0702 09:05:54.285320 2862 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:05:54.285680 kubelet[2862]: I0702 09:05:54.285661 2862 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 09:05:54.300439 kubelet[2862]: I0702 09:05:54.300395 2862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:05:54.300800 kubelet[2862]: E0702 09:05:54.300765 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Jul 2 09:05:54.314018 kubelet[2862]: I0702 09:05:54.313972 2862 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 09:05:54.314252 kubelet[2862]: I0702 09:05:54.314227 2862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:05:54.314481 kubelet[2862]: I0702 09:05:54.314450 2862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:05:54.314481 kubelet[2862]: I0702 09:05:54.314479 2862 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:05:54.314606 kubelet[2862]: I0702 09:05:54.314488 2862 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:05:54.316160 kubelet[2862]: I0702 
09:05:54.316115 2862 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:05:54.319563 kubelet[2862]: I0702 09:05:54.319532 2862 kubelet.go:396] "Attempting to sync node with API server" Jul 2 09:05:54.319611 kubelet[2862]: I0702 09:05:54.319572 2862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:05:54.319611 kubelet[2862]: I0702 09:05:54.319600 2862 kubelet.go:312] "Adding apiserver pod source" Jul 2 09:05:54.319656 kubelet[2862]: I0702 09:05:54.319616 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:05:54.322509 kubelet[2862]: I0702 09:05:54.322455 2862 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:05:54.322897 kubelet[2862]: I0702 09:05:54.322867 2862 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 09:05:54.323673 kubelet[2862]: W0702 09:05:54.323639 2862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 2 09:05:54.326342 kubelet[2862]: I0702 09:05:54.326306 2862 server.go:1256] "Started kubelet"
Jul 2 09:05:54.326560 kubelet[2862]: W0702 09:05:54.326516 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.326595 kubelet[2862]: E0702 09:05:54.326568 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.329319 kubelet[2862]: W0702 09:05:54.328017 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-59f2e70dce&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.329319 kubelet[2862]: E0702 09:05:54.328071 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-59f2e70dce&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.329319 kubelet[2862]: I0702 09:05:54.328133 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 09:05:54.329319 kubelet[2862]: I0702 09:05:54.328197 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 09:05:54.329319 kubelet[2862]: I0702 09:05:54.328413 2862 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 09:05:54.329319 kubelet[2862]: I0702 09:05:54.328474 2862 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 09:05:54.329319 kubelet[2862]: I0702 09:05:54.329207 2862 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 09:05:54.335332 kubelet[2862]: I0702 09:05:54.334788 2862 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 09:05:54.336455 kubelet[2862]: E0702 09:05:54.336424 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms"
Jul 2 09:05:54.337638 kubelet[2862]: E0702 09:05:54.337583 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-59f2e70dce.17de5a1516d98c57  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-59f2e70dce,UID:ci-3975.1.1-a-59f2e70dce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-59f2e70dce,},FirstTimestamp:2024-07-02 09:05:54.326277207 +0000 UTC m=+0.745418273,LastTimestamp:2024-07-02 09:05:54.326277207 +0000 UTC m=+0.745418273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-59f2e70dce,}"
Jul 2 09:05:54.338136 kubelet[2862]: I0702 09:05:54.337846 2862 factory.go:221] Registration of the systemd container factory successfully
Jul 2 09:05:54.338136 kubelet[2862]: I0702 09:05:54.337986 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 09:05:54.338839 kubelet[2862]: I0702 09:05:54.338813 2862 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 09:05:54.340567 kubelet[2862]: I0702 09:05:54.340545 2862 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 09:05:54.342347 kubelet[2862]: W0702 09:05:54.342282 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.342347 kubelet[2862]: E0702 09:05:54.342345 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.344488 kubelet[2862]: E0702 09:05:54.344207 2862 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 09:05:54.345121 kubelet[2862]: I0702 09:05:54.345077 2862 factory.go:221] Registration of the containerd container factory successfully
Jul 2 09:05:54.383557 kubelet[2862]: I0702 09:05:54.383528 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 09:05:54.384980 kubelet[2862]: I0702 09:05:54.384955 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 09:05:54.385107 kubelet[2862]: I0702 09:05:54.385097 2862 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 09:05:54.385174 kubelet[2862]: I0702 09:05:54.385166 2862 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 09:05:54.385279 kubelet[2862]: E0702 09:05:54.385264 2862 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 09:05:54.387060 kubelet[2862]: W0702 09:05:54.387017 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.388156 kubelet[2862]: E0702 09:05:54.388115 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:54.460973 kubelet[2862]: I0702 09:05:54.460931 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.461561 kubelet[2862]: E0702 09:05:54.461527 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.462066 kubelet[2862]: I0702 09:05:54.461783 2862 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 09:05:54.462066 kubelet[2862]: I0702 09:05:54.461800 2862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 09:05:54.462066 kubelet[2862]: I0702 09:05:54.461818 2862 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:05:54.471279 kubelet[2862]: I0702 09:05:54.471237 2862 policy_none.go:49] "None policy: Start"
Jul 2 09:05:54.472151 kubelet[2862]: I0702 09:05:54.472082 2862 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 09:05:54.472151 kubelet[2862]: I0702 09:05:54.472134 2862 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 09:05:54.479944 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 09:05:54.485960 kubelet[2862]: E0702 09:05:54.485915 2862 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 09:05:54.497635 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 09:05:54.509342 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 09:05:54.511790 kubelet[2862]: I0702 09:05:54.510800 2862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 09:05:54.511790 kubelet[2862]: I0702 09:05:54.511070 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 09:05:54.513085 kubelet[2862]: E0702 09:05:54.513060 2862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-a-59f2e70dce\" not found"
Jul 2 09:05:54.536975 kubelet[2862]: E0702 09:05:54.536846 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms"
Jul 2 09:05:54.664275 kubelet[2862]: I0702 09:05:54.664238 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.664769 kubelet[2862]: E0702 09:05:54.664658 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.686832 kubelet[2862]: I0702 09:05:54.686798 2862 topology_manager.go:215] "Topology Admit Handler" podUID="48980a00345c16b4dcdd036bfda2807a" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.688943 kubelet[2862]: I0702 09:05:54.688904 2862 topology_manager.go:215] "Topology Admit Handler" podUID="47264e6b6514801b614a0cb8bef863a2" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.691210 kubelet[2862]: I0702 09:05:54.690920 2862 topology_manager.go:215] "Topology Admit Handler" podUID="e9b30fd8544e9423d640c13021120caa" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.700462 systemd[1]: Created slice kubepods-burstable-pod48980a00345c16b4dcdd036bfda2807a.slice - libcontainer container kubepods-burstable-pod48980a00345c16b4dcdd036bfda2807a.slice.
Jul 2 09:05:54.717735 systemd[1]: Created slice kubepods-burstable-pode9b30fd8544e9423d640c13021120caa.slice - libcontainer container kubepods-burstable-pode9b30fd8544e9423d640c13021120caa.slice.
Jul 2 09:05:54.721421 systemd[1]: Created slice kubepods-burstable-pod47264e6b6514801b614a0cb8bef863a2.slice - libcontainer container kubepods-burstable-pod47264e6b6514801b614a0cb8bef863a2.slice.
Jul 2 09:05:54.742046 kubelet[2862]: I0702 09:05:54.741969 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742210 kubelet[2862]: I0702 09:05:54.742070 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742210 kubelet[2862]: I0702 09:05:54.742101 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742210 kubelet[2862]: I0702 09:05:54.742143 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9b30fd8544e9423d640c13021120caa-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-59f2e70dce\" (UID: \"e9b30fd8544e9423d640c13021120caa\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742210 kubelet[2862]: I0702 09:05:54.742167 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742320 kubelet[2862]: I0702 09:05:54.742221 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742320 kubelet[2862]: I0702 09:05:54.742245 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742320 kubelet[2862]: I0702 09:05:54.742288 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.742320 kubelet[2862]: I0702 09:05:54.742313 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:54.854034 kubelet[2862]: E0702 09:05:54.853908 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-59f2e70dce.17de5a1516d98c57  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-59f2e70dce,UID:ci-3975.1.1-a-59f2e70dce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-59f2e70dce,},FirstTimestamp:2024-07-02 09:05:54.326277207 +0000 UTC m=+0.745418273,LastTimestamp:2024-07-02 09:05:54.326277207 +0000 UTC m=+0.745418273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-59f2e70dce,}"
Jul 2 09:05:54.938350 kubelet[2862]: E0702 09:05:54.938309 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms"
Jul 2 09:05:55.014764 containerd[1697]: time="2024-07-02T09:05:55.014669796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-59f2e70dce,Uid:48980a00345c16b4dcdd036bfda2807a,Namespace:kube-system,Attempt:0,}"
Jul 2 09:05:55.021343 containerd[1697]: time="2024-07-02T09:05:55.020957320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-59f2e70dce,Uid:e9b30fd8544e9423d640c13021120caa,Namespace:kube-system,Attempt:0,}"
Jul 2 09:05:55.025606 containerd[1697]: time="2024-07-02T09:05:55.025274163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-59f2e70dce,Uid:47264e6b6514801b614a0cb8bef863a2,Namespace:kube-system,Attempt:0,}"
Jul 2 09:05:55.067214 kubelet[2862]: I0702 09:05:55.067177 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:55.067641 kubelet[2862]: E0702 09:05:55.067613 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:55.473428 kubelet[2862]: W0702 09:05:55.473338 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.473428 kubelet[2862]: E0702 09:05:55.473445 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.590927 kubelet[2862]: W0702 09:05:55.590887 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.590927 kubelet[2862]: E0702 09:05:55.590930 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.608457 kubelet[2862]: W0702 09:05:55.608332 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.608457 kubelet[2862]: E0702 09:05:55.608431 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.621970 kubelet[2862]: W0702 09:05:55.621875 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-59f2e70dce&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.621970 kubelet[2862]: E0702 09:05:55.621933 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-59f2e70dce&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:55.650821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756194461.mount: Deactivated successfully.
Jul 2 09:05:55.677369 containerd[1697]: time="2024-07-02T09:05:55.677297886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 09:05:55.679235 containerd[1697]: time="2024-07-02T09:05:55.679141607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jul 2 09:05:55.682141 containerd[1697]: time="2024-07-02T09:05:55.681345009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 09:05:55.684126 containerd[1697]: time="2024-07-02T09:05:55.684068331Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 09:05:55.686010 containerd[1697]: time="2024-07-02T09:05:55.685961292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 09:05:55.689232 containerd[1697]: time="2024-07-02T09:05:55.689180894Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 09:05:55.691670 containerd[1697]: time="2024-07-02T09:05:55.691597896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 09:05:55.695944 containerd[1697]: time="2024-07-02T09:05:55.695877579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 09:05:55.696897 containerd[1697]: time="2024-07-02T09:05:55.696630620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 675.56974ms"
Jul 2 09:05:55.699632 containerd[1697]: time="2024-07-02T09:05:55.699581782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 674.158738ms"
Jul 2 09:05:55.699797 containerd[1697]: time="2024-07-02T09:05:55.699768542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 684.985146ms"
Jul 2 09:05:55.739265 kubelet[2862]: E0702 09:05:55.739151 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s"
Jul 2 09:05:55.870598 kubelet[2862]: I0702 09:05:55.870227 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:55.870758 kubelet[2862]: E0702 09:05:55.870610 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:56.238380 containerd[1697]: time="2024-07-02T09:05:56.238036900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:05:56.238380 containerd[1697]: time="2024-07-02T09:05:56.238111620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.238380 containerd[1697]: time="2024-07-02T09:05:56.238128260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:05:56.238380 containerd[1697]: time="2024-07-02T09:05:56.238138540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.240786 containerd[1697]: time="2024-07-02T09:05:56.237922540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:05:56.240786 containerd[1697]: time="2024-07-02T09:05:56.240416062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.240786 containerd[1697]: time="2024-07-02T09:05:56.240433342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:05:56.240786 containerd[1697]: time="2024-07-02T09:05:56.240453782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.242386 containerd[1697]: time="2024-07-02T09:05:56.242258503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:05:56.242503 containerd[1697]: time="2024-07-02T09:05:56.242440343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.242503 containerd[1697]: time="2024-07-02T09:05:56.242490463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:05:56.243146 containerd[1697]: time="2024-07-02T09:05:56.242908184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:05:56.263998 systemd[1]: Started cri-containerd-b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5.scope - libcontainer container b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5.
Jul 2 09:05:56.273627 systemd[1]: Started cri-containerd-872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494.scope - libcontainer container 872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494.
Jul 2 09:05:56.275541 systemd[1]: Started cri-containerd-ef88f72a1163d5498c0b6ff29cababded0ef12ae1f612b787e808af87d372653.scope - libcontainer container ef88f72a1163d5498c0b6ff29cababded0ef12ae1f612b787e808af87d372653.
Jul 2 09:05:56.327381 containerd[1697]: time="2024-07-02T09:05:56.326772686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-59f2e70dce,Uid:e9b30fd8544e9423d640c13021120caa,Namespace:kube-system,Attempt:0,} returns sandbox id \"872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494\""
Jul 2 09:05:56.330867 containerd[1697]: time="2024-07-02T09:05:56.330825369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-59f2e70dce,Uid:48980a00345c16b4dcdd036bfda2807a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef88f72a1163d5498c0b6ff29cababded0ef12ae1f612b787e808af87d372653\""
Jul 2 09:05:56.335136 containerd[1697]: time="2024-07-02T09:05:56.334931412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-59f2e70dce,Uid:47264e6b6514801b614a0cb8bef863a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5\""
Jul 2 09:05:56.336729 containerd[1697]: time="2024-07-02T09:05:56.336593813Z" level=info msg="CreateContainer within sandbox \"872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 09:05:56.338404 containerd[1697]: time="2024-07-02T09:05:56.338036214Z" level=info msg="CreateContainer within sandbox \"ef88f72a1163d5498c0b6ff29cababded0ef12ae1f612b787e808af87d372653\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 09:05:56.340890 containerd[1697]: time="2024-07-02T09:05:56.340737616Z" level=info msg="CreateContainer within sandbox \"b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 09:05:56.401816 containerd[1697]: time="2024-07-02T09:05:56.401760221Z" level=info msg="CreateContainer within sandbox \"ef88f72a1163d5498c0b6ff29cababded0ef12ae1f612b787e808af87d372653\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"186d3b63a86643e0b8364cfd5471e8a4f2628500b55f230f249db97c47d1d65a\""
Jul 2 09:05:56.402618 containerd[1697]: time="2024-07-02T09:05:56.402577142Z" level=info msg="StartContainer for \"186d3b63a86643e0b8364cfd5471e8a4f2628500b55f230f249db97c47d1d65a\""
Jul 2 09:05:56.405890 containerd[1697]: time="2024-07-02T09:05:56.405828224Z" level=info msg="CreateContainer within sandbox \"b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030\""
Jul 2 09:05:56.406862 containerd[1697]: time="2024-07-02T09:05:56.406810785Z" level=info msg="CreateContainer within sandbox \"872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d\""
Jul 2 09:05:56.407347 containerd[1697]: time="2024-07-02T09:05:56.407311265Z" level=info msg="StartContainer for \"5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d\""
Jul 2 09:05:56.409219 containerd[1697]: time="2024-07-02T09:05:56.408555306Z" level=info msg="StartContainer for \"2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030\""
Jul 2 09:05:56.433619 systemd[1]: Started cri-containerd-186d3b63a86643e0b8364cfd5471e8a4f2628500b55f230f249db97c47d1d65a.scope - libcontainer container 186d3b63a86643e0b8364cfd5471e8a4f2628500b55f230f249db97c47d1d65a.
Jul 2 09:05:56.451991 systemd[1]: Started cri-containerd-2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030.scope - libcontainer container 2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030.
Jul 2 09:05:56.456598 systemd[1]: Started cri-containerd-5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d.scope - libcontainer container 5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d.
Jul 2 09:05:56.496232 containerd[1697]: time="2024-07-02T09:05:56.496086611Z" level=info msg="StartContainer for \"186d3b63a86643e0b8364cfd5471e8a4f2628500b55f230f249db97c47d1d65a\" returns successfully"
Jul 2 09:05:56.498875 kubelet[2862]: E0702 09:05:56.498843 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 2 09:05:56.519539 containerd[1697]: time="2024-07-02T09:05:56.519473988Z" level=info msg="StartContainer for \"2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030\" returns successfully"
Jul 2 09:05:56.539071 containerd[1697]: time="2024-07-02T09:05:56.539013563Z" level=info msg="StartContainer for \"5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d\" returns successfully"
Jul 2 09:05:57.472621 kubelet[2862]: I0702 09:05:57.472584 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:59.031642 kubelet[2862]: I0702 09:05:59.031593 2862 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:05:59.177138 kubelet[2862]: E0702 09:05:59.177096 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jul 2 09:05:59.324668 kubelet[2862]: I0702 09:05:59.324477 2862 apiserver.go:52] "Watching apiserver"
Jul 2 09:05:59.341593 kubelet[2862]: I0702 09:05:59.341532 2862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 09:05:59.424593 kubelet[2862]: E0702 09:05:59.423551 2862 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:00.875858 kubelet[2862]: W0702 09:06:00.875470 2862 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 09:06:01.294720 kubelet[2862]: W0702 09:06:01.294641 2862 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 09:06:01.685037 systemd[1]: Reloading requested from client PID 3135 ('systemctl') (unit session-9.scope)...
Jul 2 09:06:01.685055 systemd[1]: Reloading...
Jul 2 09:06:01.783473 zram_generator::config[3176]: No configuration found.
Jul 2 09:06:01.887344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:06:01.974650 systemd[1]: Reloading finished in 289 ms.
Jul 2 09:06:02.013569 kubelet[2862]: I0702 09:06:02.013525 2862 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 09:06:02.014342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:06:02.024542 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 09:06:02.024797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:06:02.024885 systemd[1]: kubelet.service: Consumed 1.126s CPU time, 111.9M memory peak, 0B memory swap peak.
Jul 2 09:06:02.029963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:06:02.270680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:06:02.278375 (kubelet)[3236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 09:06:02.332787 kubelet[3236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:06:02.332787 kubelet[3236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 09:06:02.332787 kubelet[3236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:06:02.333184 kubelet[3236]: I0702 09:06:02.332855 3236 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 09:06:02.339197 kubelet[3236]: I0702 09:06:02.338096 3236 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 09:06:02.339197 kubelet[3236]: I0702 09:06:02.338127 3236 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 09:06:02.339197 kubelet[3236]: I0702 09:06:02.338326 3236 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 09:06:02.340115 kubelet[3236]: I0702 09:06:02.340090 3236 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 09:06:02.342452 kubelet[3236]: I0702 09:06:02.342413 3236 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 09:06:02.356504 kubelet[3236]: I0702 09:06:02.356446 3236 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 09:06:02.356731 kubelet[3236]: I0702 09:06:02.356663 3236 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 09:06:02.356861 kubelet[3236]: I0702 09:06:02.356836 3236 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 09:06:02.356938 kubelet[3236]: I0702 09:06:02.356867 3236 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 09:06:02.356938 kubelet[3236]: I0702 09:06:02.356875 3236 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 09:06:02.356938 kubelet[3236]: I0702 09:06:02.356904 3236 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:06:02.357028 kubelet[3236]: I0702 09:06:02.357004 3236 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 09:06:02.357028 kubelet[3236]: I0702 09:06:02.357017 3236 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 09:06:02.357794 kubelet[3236]: I0702 09:06:02.357036 3236 kubelet.go:312] "Adding apiserver pod source"
Jul 2 09:06:02.357794 kubelet[3236]: I0702 09:06:02.357052 3236 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 09:06:02.358257 kubelet[3236]: I0702 09:06:02.358227 3236 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 09:06:02.358564 kubelet[3236]: I0702 09:06:02.358539 3236 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 09:06:02.359007 kubelet[3236]: I0702 09:06:02.358973 3236 server.go:1256] "Started kubelet"
Jul 2 09:06:02.360883 kubelet[3236]: I0702 09:06:02.360847 3236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 09:06:02.366756 kubelet[3236]: I0702 09:06:02.366712 3236 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 09:06:02.367758 kubelet[3236]: I0702 09:06:02.367734 3236 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 09:06:02.369250 kubelet[3236]: I0702 09:06:02.368864 3236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 09:06:02.369250 kubelet[3236]: I0702 09:06:02.369076 3236 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 09:06:02.370290 kubelet[3236]: I0702 09:06:02.370248 3236 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 09:06:02.374410 kubelet[3236]: I0702 09:06:02.372542 3236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 09:06:02.375283 kubelet[3236]: I0702 09:06:02.373199 3236 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 09:06:02.377897 kubelet[3236]: I0702 09:06:02.373332 3236 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 09:06:02.378031 kubelet[3236]: I0702 09:06:02.375443 3236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 09:06:02.380504 kubelet[3236]: I0702 09:06:02.380485 3236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 09:06:02.381500 kubelet[3236]: I0702 09:06:02.380618 3236 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 09:06:02.381500 kubelet[3236]: I0702 09:06:02.380648 3236 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 09:06:02.381500 kubelet[3236]: E0702 09:06:02.380702 3236 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 09:06:02.383305 kubelet[3236]: E0702 09:06:02.383272 3236 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 09:06:02.394417 kubelet[3236]: I0702 09:06:02.393906 3236 factory.go:221] Registration of the containerd container factory successfully
Jul 2 09:06:02.394417 kubelet[3236]: I0702 09:06:02.393930 3236 factory.go:221] Registration of the systemd container factory successfully
Jul 2 09:06:02.457014 kubelet[3236]: I0702 09:06:02.456983 3236 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 09:06:02.457014 kubelet[3236]: I0702 09:06:02.457007 3236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 09:06:02.457180 kubelet[3236]: I0702 09:06:02.457035 3236 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:06:02.457223 kubelet[3236]: I0702 09:06:02.457202 3236 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 09:06:02.457252 kubelet[3236]: I0702 09:06:02.457229 3236 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 09:06:02.457252 kubelet[3236]: I0702 09:06:02.457236 3236 policy_none.go:49] "None policy: Start"
Jul 2 09:06:02.457986 kubelet[3236]: I0702 09:06:02.457962 3236 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 09:06:02.458054 kubelet[3236]: I0702 09:06:02.457994 3236 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 09:06:02.458199 kubelet[3236]: I0702 09:06:02.458176 3236 state_mem.go:75] "Updated machine memory state"
Jul 2 09:06:02.463031 kubelet[3236]: I0702 09:06:02.462470 3236 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 09:06:02.463031 kubelet[3236]: I0702 09:06:02.462717 3236 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 09:06:02.474750 kubelet[3236]: I0702 09:06:02.474718 3236 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.482641 kubelet[3236]: I0702 09:06:02.481156 3236 topology_manager.go:215] "Topology Admit Handler" podUID="48980a00345c16b4dcdd036bfda2807a" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.482641 kubelet[3236]: I0702 09:06:02.481273 3236 topology_manager.go:215] "Topology Admit Handler" podUID="47264e6b6514801b614a0cb8bef863a2" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.482641 kubelet[3236]: I0702 09:06:02.481340 3236 topology_manager.go:215] "Topology Admit Handler" podUID="e9b30fd8544e9423d640c13021120caa" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.493858 kubelet[3236]: W0702 09:06:02.493832 3236 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 09:06:02.494210 kubelet[3236]: W0702 09:06:02.494197 3236 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 09:06:02.494664 kubelet[3236]: E0702 09:06:02.494642 3236 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.1.1-a-59f2e70dce\" already exists" pod="kube-system/kube-scheduler-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.497826 kubelet[3236]: I0702 09:06:02.497779 3236 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.497952 kubelet[3236]: I0702 09:06:02.497885 3236 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.502529 kubelet[3236]: W0702 09:06:02.502382 3236 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 09:06:02.502529 kubelet[3236]: E0702 09:06:02.502456 3236 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.579922 kubelet[3236]: I0702 09:06:02.579258 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.579922 kubelet[3236]: I0702 09:06:02.579303 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.579922 kubelet[3236]: I0702 09:06:02.579325 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.579922 kubelet[3236]: I0702 09:06:02.579347 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9b30fd8544e9423d640c13021120caa-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-59f2e70dce\" (UID: \"e9b30fd8544e9423d640c13021120caa\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.579922 kubelet[3236]: I0702 09:06:02.579379 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.580157 kubelet[3236]: I0702 09:06:02.579400 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.580157 kubelet[3236]: I0702 09:06:02.579419 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.580157 kubelet[3236]: I0702 09:06:02.579441 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47264e6b6514801b614a0cb8bef863a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-59f2e70dce\" (UID: \"47264e6b6514801b614a0cb8bef863a2\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:02.580157 kubelet[3236]: I0702 09:06:02.579469 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48980a00345c16b4dcdd036bfda2807a-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-59f2e70dce\" (UID: \"48980a00345c16b4dcdd036bfda2807a\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce"
Jul 2 09:06:03.358040 kubelet[3236]: I0702 09:06:03.357967 3236 apiserver.go:52] "Watching apiserver"
Jul 2 09:06:03.378909 kubelet[3236]: I0702 09:06:03.378759 3236 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 09:06:03.537013 kubelet[3236]: I0702 09:06:03.536969 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-a-59f2e70dce" podStartSLOduration=3.536920935 podStartE2EDuration="3.536920935s" podCreationTimestamp="2024-07-02 09:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:03.504467653 +0000 UTC m=+1.220053796" watchObservedRunningTime="2024-07-02 09:06:03.536920935 +0000 UTC m=+1.252507078"
Jul 2 09:06:03.583209 kubelet[3236]: I0702 09:06:03.582976 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-a-59f2e70dce" podStartSLOduration=2.582923828 podStartE2EDuration="2.582923828s" podCreationTimestamp="2024-07-02 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:03.538105939 +0000 UTC m=+1.253692082" watchObservedRunningTime="2024-07-02 09:06:03.582923828 +0000 UTC m=+1.298509971"
Jul 2 09:06:03.583209 kubelet[3236]: I0702 09:06:03.583096 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-59f2e70dce" podStartSLOduration=1.5830794689999999 podStartE2EDuration="1.583079469s" podCreationTimestamp="2024-07-02 09:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:03.580933861 +0000 UTC m=+1.296520044" watchObservedRunningTime="2024-07-02 09:06:03.583079469 +0000 UTC m=+1.298665612"
Jul 2 09:06:07.209858 sudo[2205]: pam_unix(sudo:session): session closed for user root
Jul 2 09:06:07.292694 sshd[2186]: pam_unix(sshd:session): session closed for user core
Jul 2 09:06:07.296954 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:37426.service: Deactivated successfully.
Jul 2 09:06:07.299298 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 09:06:07.299849 systemd[1]: session-9.scope: Consumed 7.130s CPU time, 133.4M memory peak, 0B memory swap peak.
Jul 2 09:06:07.301565 systemd-logind[1665]: Session 9 logged out. Waiting for processes to exit.
Jul 2 09:06:07.302882 systemd-logind[1665]: Removed session 9.
Jul 2 09:06:16.566594 kubelet[3236]: I0702 09:06:16.566550 3236 topology_manager.go:215] "Topology Admit Handler" podUID="a7f1ff58-555a-40d2-be26-20a50177bf24" podNamespace="kube-system" podName="kube-proxy-mqvbb"
Jul 2 09:06:16.575704 systemd[1]: Created slice kubepods-besteffort-poda7f1ff58_555a_40d2_be26_20a50177bf24.slice - libcontainer container kubepods-besteffort-poda7f1ff58_555a_40d2_be26_20a50177bf24.slice.
Jul 2 09:06:16.668311 kubelet[3236]: I0702 09:06:16.668136 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7f1ff58-555a-40d2-be26-20a50177bf24-xtables-lock\") pod \"kube-proxy-mqvbb\" (UID: \"a7f1ff58-555a-40d2-be26-20a50177bf24\") " pod="kube-system/kube-proxy-mqvbb"
Jul 2 09:06:16.668311 kubelet[3236]: I0702 09:06:16.668192 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7f1ff58-555a-40d2-be26-20a50177bf24-kube-proxy\") pod \"kube-proxy-mqvbb\" (UID: \"a7f1ff58-555a-40d2-be26-20a50177bf24\") " pod="kube-system/kube-proxy-mqvbb"
Jul 2 09:06:16.668311 kubelet[3236]: I0702 09:06:16.668219 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbd2j\" (UniqueName: \"kubernetes.io/projected/a7f1ff58-555a-40d2-be26-20a50177bf24-kube-api-access-xbd2j\") pod \"kube-proxy-mqvbb\" (UID: \"a7f1ff58-555a-40d2-be26-20a50177bf24\") " pod="kube-system/kube-proxy-mqvbb"
Jul 2 09:06:16.668311 kubelet[3236]: I0702 09:06:16.668239 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7f1ff58-555a-40d2-be26-20a50177bf24-lib-modules\") pod \"kube-proxy-mqvbb\" (UID: \"a7f1ff58-555a-40d2-be26-20a50177bf24\") " pod="kube-system/kube-proxy-mqvbb"
Jul 2 09:06:16.685731 kubelet[3236]: I0702 09:06:16.685510 3236 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 09:06:16.687262 containerd[1697]: time="2024-07-02T09:06:16.686644512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 09:06:16.688430 kubelet[3236]: I0702 09:06:16.688112 3236 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 09:06:16.776790 kubelet[3236]: E0702 09:06:16.776749 3236 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 09:06:16.776790 kubelet[3236]: E0702 09:06:16.776790 3236 projected.go:200] Error preparing data for projected volume kube-api-access-xbd2j for pod kube-system/kube-proxy-mqvbb: configmap "kube-root-ca.crt" not found
Jul 2 09:06:16.776966 kubelet[3236]: E0702 09:06:16.776867 3236 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7f1ff58-555a-40d2-be26-20a50177bf24-kube-api-access-xbd2j podName:a7f1ff58-555a-40d2-be26-20a50177bf24 nodeName:}" failed. No retries permitted until 2024-07-02 09:06:17.276843439 +0000 UTC m=+14.992429582 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xbd2j" (UniqueName: "kubernetes.io/projected/a7f1ff58-555a-40d2-be26-20a50177bf24-kube-api-access-xbd2j") pod "kube-proxy-mqvbb" (UID: "a7f1ff58-555a-40d2-be26-20a50177bf24") : configmap "kube-root-ca.crt" not found
Jul 2 09:06:17.451289 kubelet[3236]: I0702 09:06:17.450554 3236 topology_manager.go:215] "Topology Admit Handler" podUID="74933842-2f42-4cd5-a700-8133b9c74a82" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-2tfkl"
Jul 2 09:06:17.459591 systemd[1]: Created slice kubepods-besteffort-pod74933842_2f42_4cd5_a700_8133b9c74a82.slice - libcontainer container kubepods-besteffort-pod74933842_2f42_4cd5_a700_8133b9c74a82.slice.
Jul 2 09:06:17.473860 kubelet[3236]: I0702 09:06:17.473801 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74933842-2f42-4cd5-a700-8133b9c74a82-var-lib-calico\") pod \"tigera-operator-76c4974c85-2tfkl\" (UID: \"74933842-2f42-4cd5-a700-8133b9c74a82\") " pod="tigera-operator/tigera-operator-76c4974c85-2tfkl"
Jul 2 09:06:17.473860 kubelet[3236]: I0702 09:06:17.473855 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcwwn\" (UniqueName: \"kubernetes.io/projected/74933842-2f42-4cd5-a700-8133b9c74a82-kube-api-access-wcwwn\") pod \"tigera-operator-76c4974c85-2tfkl\" (UID: \"74933842-2f42-4cd5-a700-8133b9c74a82\") " pod="tigera-operator/tigera-operator-76c4974c85-2tfkl"
Jul 2 09:06:17.485162 containerd[1697]: time="2024-07-02T09:06:17.485110780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqvbb,Uid:a7f1ff58-555a-40d2-be26-20a50177bf24,Namespace:kube-system,Attempt:0,}"
Jul 2 09:06:17.535751 containerd[1697]: time="2024-07-02T09:06:17.535586696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:06:17.535751 containerd[1697]: time="2024-07-02T09:06:17.535657976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:06:17.535751 containerd[1697]: time="2024-07-02T09:06:17.535685616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:06:17.535751 containerd[1697]: time="2024-07-02T09:06:17.535706816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:06:17.558599 systemd[1]: Started cri-containerd-df1836eedd18ad130f60a3d6fc858474ba994d9a8d2c743ad76914f9d40a3b17.scope - libcontainer container df1836eedd18ad130f60a3d6fc858474ba994d9a8d2c743ad76914f9d40a3b17.
Jul 2 09:06:17.585834 containerd[1697]: time="2024-07-02T09:06:17.585742210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqvbb,Uid:a7f1ff58-555a-40d2-be26-20a50177bf24,Namespace:kube-system,Attempt:0,} returns sandbox id \"df1836eedd18ad130f60a3d6fc858474ba994d9a8d2c743ad76914f9d40a3b17\""
Jul 2 09:06:17.592068 containerd[1697]: time="2024-07-02T09:06:17.592001305Z" level=info msg="CreateContainer within sandbox \"df1836eedd18ad130f60a3d6fc858474ba994d9a8d2c743ad76914f9d40a3b17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 09:06:17.626962 containerd[1697]: time="2024-07-02T09:06:17.626849224Z" level=info msg="CreateContainer within sandbox \"df1836eedd18ad130f60a3d6fc858474ba994d9a8d2c743ad76914f9d40a3b17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f99d05a7ba1f64ac7d22ff9f04ec59b9974e7a0f07e412771892cecf9862c76\""
Jul 2 09:06:17.629689 containerd[1697]: time="2024-07-02T09:06:17.629538271Z" level=info msg="StartContainer for \"9f99d05a7ba1f64ac7d22ff9f04ec59b9974e7a0f07e412771892cecf9862c76\""
Jul 2 09:06:17.655971 systemd[1]: Started cri-containerd-9f99d05a7ba1f64ac7d22ff9f04ec59b9974e7a0f07e412771892cecf9862c76.scope - libcontainer container 9f99d05a7ba1f64ac7d22ff9f04ec59b9974e7a0f07e412771892cecf9862c76.
Jul 2 09:06:17.689441 containerd[1697]: time="2024-07-02T09:06:17.689331527Z" level=info msg="StartContainer for \"9f99d05a7ba1f64ac7d22ff9f04ec59b9974e7a0f07e412771892cecf9862c76\" returns successfully"
Jul 2 09:06:17.766391 containerd[1697]: time="2024-07-02T09:06:17.766323024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2tfkl,Uid:74933842-2f42-4cd5-a700-8133b9c74a82,Namespace:tigera-operator,Attempt:0,}"
Jul 2 09:06:17.804810 containerd[1697]: time="2024-07-02T09:06:17.804700552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:06:17.804810 containerd[1697]: time="2024-07-02T09:06:17.804769632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:06:17.805465 containerd[1697]: time="2024-07-02T09:06:17.805233513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:06:17.805465 containerd[1697]: time="2024-07-02T09:06:17.805271833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:06:17.825052 systemd[1]: Started cri-containerd-de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176.scope - libcontainer container de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176.
Jul 2 09:06:17.855383 containerd[1697]: time="2024-07-02T09:06:17.855307667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2tfkl,Uid:74933842-2f42-4cd5-a700-8133b9c74a82,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\""
Jul 2 09:06:17.860345 containerd[1697]: time="2024-07-02T09:06:17.859550197Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 09:06:18.479194 kubelet[3236]: I0702 09:06:18.479043 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mqvbb" podStartSLOduration=2.478999135 podStartE2EDuration="2.478999135s" podCreationTimestamp="2024-07-02 09:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:18.478845975 +0000 UTC m=+16.194432118" watchObservedRunningTime="2024-07-02 09:06:18.478999135 +0000 UTC m=+16.194585278"
Jul 2 09:06:19.266633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585092610.mount: Deactivated successfully.
Jul 2 09:06:19.665965 containerd[1697]: time="2024-07-02T09:06:19.665258610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:06:19.674385 containerd[1697]: time="2024-07-02T09:06:19.674276871Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473618"
Jul 2 09:06:19.680663 containerd[1697]: time="2024-07-02T09:06:19.680595565Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:06:19.685369 containerd[1697]: time="2024-07-02T09:06:19.685118616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:06:19.686496 containerd[1697]: time="2024-07-02T09:06:19.686424099Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.826827782s"
Jul 2 09:06:19.686496 containerd[1697]: time="2024-07-02T09:06:19.686461499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Jul 2 09:06:19.689618 containerd[1697]: time="2024-07-02T09:06:19.689571426Z" level=info msg="CreateContainer within sandbox \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 09:06:19.726345 containerd[1697]: time="2024-07-02T09:06:19.726288990Z" level=info msg="CreateContainer within sandbox \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea\""
Jul 2 09:06:19.726846 containerd[1697]: time="2024-07-02T09:06:19.726818511Z" level=info msg="StartContainer for \"c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea\""
Jul 2 09:06:19.756565 systemd[1]: Started cri-containerd-c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea.scope - libcontainer container c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea.
Jul 2 09:06:19.784858 containerd[1697]: time="2024-07-02T09:06:19.784683924Z" level=info msg="StartContainer for \"c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea\" returns successfully"
Jul 2 09:06:22.400424 kubelet[3236]: I0702 09:06:22.400212 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-2tfkl" podStartSLOduration=3.5701470410000002 podStartE2EDuration="5.40016743s" podCreationTimestamp="2024-07-02 09:06:17 +0000 UTC" firstStartedPulling="2024-07-02 09:06:17.85663551 +0000 UTC m=+15.572221653" lastFinishedPulling="2024-07-02 09:06:19.686655899 +0000 UTC m=+17.402242042" observedRunningTime="2024-07-02 09:06:20.48188892 +0000 UTC m=+18.197475023" watchObservedRunningTime="2024-07-02 09:06:22.40016743 +0000 UTC m=+20.115753573"
Jul 2 09:06:23.413474 kubelet[3236]: I0702 09:06:23.411542 3236 topology_manager.go:215] "Topology Admit Handler" podUID="e8aee6a3-9e52-4dc3-a039-04bcb7faed80" podNamespace="calico-system" podName="calico-typha-58c87ff99b-z4h6x"
Jul 2 09:06:23.420605 kubelet[3236]: W0702 09:06:23.420569 3236 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.420605 kubelet[3236]: E0702 09:06:23.420610 3236 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.422591 systemd[1]: Created slice kubepods-besteffort-pode8aee6a3_9e52_4dc3_a039_04bcb7faed80.slice - libcontainer container kubepods-besteffort-pode8aee6a3_9e52_4dc3_a039_04bcb7faed80.slice.
Jul 2 09:06:23.508830 kubelet[3236]: I0702 09:06:23.508765 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8aee6a3-9e52-4dc3-a039-04bcb7faed80-tigera-ca-bundle\") pod \"calico-typha-58c87ff99b-z4h6x\" (UID: \"e8aee6a3-9e52-4dc3-a039-04bcb7faed80\") " pod="calico-system/calico-typha-58c87ff99b-z4h6x"
Jul 2 09:06:23.508830 kubelet[3236]: I0702 09:06:23.508826 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8aee6a3-9e52-4dc3-a039-04bcb7faed80-typha-certs\") pod \"calico-typha-58c87ff99b-z4h6x\" (UID: \"e8aee6a3-9e52-4dc3-a039-04bcb7faed80\") " pod="calico-system/calico-typha-58c87ff99b-z4h6x"
Jul 2 09:06:23.509005 kubelet[3236]: I0702 09:06:23.508852 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcphj\" (UniqueName: \"kubernetes.io/projected/e8aee6a3-9e52-4dc3-a039-04bcb7faed80-kube-api-access-tcphj\") pod \"calico-typha-58c87ff99b-z4h6x\" (UID: \"e8aee6a3-9e52-4dc3-a039-04bcb7faed80\") " pod="calico-system/calico-typha-58c87ff99b-z4h6x"
Jul 2 09:06:23.522897 kubelet[3236]: I0702 09:06:23.522853 3236 topology_manager.go:215] "Topology Admit Handler" podUID="febc3671-e0de-4af4-a5e3-9fe9739f3d7a" podNamespace="calico-system" podName="calico-node-9nqkn"
Jul 2 09:06:23.534512 systemd[1]: Created slice kubepods-besteffort-podfebc3671_e0de_4af4_a5e3_9fe9739f3d7a.slice - libcontainer container kubepods-besteffort-podfebc3671_e0de_4af4_a5e3_9fe9739f3d7a.slice.
Jul 2 09:06:23.540027 kubelet[3236]: W0702 09:06:23.539961 3236 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.540027 kubelet[3236]: E0702 09:06:23.540003 3236 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.541330 kubelet[3236]: W0702 09:06:23.541290 3236 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.541330 kubelet[3236]: E0702 09:06:23.541330 3236 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-3975.1.1-a-59f2e70dce" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3975.1.1-a-59f2e70dce' and this object
Jul 2 09:06:23.609267 kubelet[3236]: I0702 09:06:23.609221 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-xtables-lock\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn"
Jul 2 09:06:23.609267 kubelet[3236]: I0702 09:06:23.609276 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-flexvol-driver-host\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn"
Jul 2 09:06:23.612278 kubelet[3236]: I0702 09:06:23.609297 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-node-certs\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn"
Jul 2 09:06:23.612278 kubelet[3236]: I0702 09:06:23.609320 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-var-run-calico\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn"
Jul 2 09:06:23.612278 kubelet[3236]: I0702 09:06:23.609350 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m267j\" (UniqueName: \"kubernetes.io/projected/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-kube-api-access-m267j\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn"
Jul 2 09:06:23.612278 kubelet[3236]: I0702 09:06:23.609399 3236
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-cni-log-dir\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612278 kubelet[3236]: I0702 09:06:23.609418 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-var-lib-calico\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612907 kubelet[3236]: I0702 09:06:23.609438 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-cni-net-dir\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612907 kubelet[3236]: I0702 09:06:23.609469 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-policysync\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612907 kubelet[3236]: I0702 09:06:23.609552 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-lib-modules\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612907 kubelet[3236]: I0702 09:06:23.609595 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-tigera-ca-bundle\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.612907 kubelet[3236]: I0702 09:06:23.609616 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/febc3671-e0de-4af4-a5e3-9fe9739f3d7a-cni-bin-dir\") pod \"calico-node-9nqkn\" (UID: \"febc3671-e0de-4af4-a5e3-9fe9739f3d7a\") " pod="calico-system/calico-node-9nqkn" Jul 2 09:06:23.655831 kubelet[3236]: I0702 09:06:23.655243 3236 topology_manager.go:215] "Topology Admit Handler" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" podNamespace="calico-system" podName="csi-node-driver-zctz4" Jul 2 09:06:23.655831 kubelet[3236]: E0702 09:06:23.655523 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:23.712477 kubelet[3236]: I0702 09:06:23.709916 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7c64a89-d23b-4bef-9c27-bbb0ad23595e-kubelet-dir\") pod \"csi-node-driver-zctz4\" (UID: \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\") " pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:23.712477 kubelet[3236]: I0702 09:06:23.710025 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4tt\" (UniqueName: \"kubernetes.io/projected/c7c64a89-d23b-4bef-9c27-bbb0ad23595e-kube-api-access-wg4tt\") pod \"csi-node-driver-zctz4\" (UID: \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\") " 
pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:23.712477 kubelet[3236]: I0702 09:06:23.710072 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7c64a89-d23b-4bef-9c27-bbb0ad23595e-registration-dir\") pod \"csi-node-driver-zctz4\" (UID: \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\") " pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:23.712477 kubelet[3236]: I0702 09:06:23.710115 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7c64a89-d23b-4bef-9c27-bbb0ad23595e-varrun\") pod \"csi-node-driver-zctz4\" (UID: \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\") " pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:23.712477 kubelet[3236]: I0702 09:06:23.710147 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7c64a89-d23b-4bef-9c27-bbb0ad23595e-socket-dir\") pod \"csi-node-driver-zctz4\" (UID: \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\") " pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:23.748483 kubelet[3236]: E0702 09:06:23.748450 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.748483 kubelet[3236]: W0702 09:06:23.748473 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.748630 kubelet[3236]: E0702 09:06:23.748507 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.810996 kubelet[3236]: E0702 09:06:23.810958 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.810996 kubelet[3236]: W0702 09:06:23.810984 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.810996 kubelet[3236]: E0702 09:06:23.811009 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.813247 kubelet[3236]: E0702 09:06:23.813211 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.813247 kubelet[3236]: W0702 09:06:23.813236 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.813418 kubelet[3236]: E0702 09:06:23.813269 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.813521 kubelet[3236]: E0702 09:06:23.813505 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.813521 kubelet[3236]: W0702 09:06:23.813519 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.813693 kubelet[3236]: E0702 09:06:23.813586 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.813892 kubelet[3236]: E0702 09:06:23.813873 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.813892 kubelet[3236]: W0702 09:06:23.813890 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.813991 kubelet[3236]: E0702 09:06:23.813910 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.814108 kubelet[3236]: E0702 09:06:23.814091 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.814108 kubelet[3236]: W0702 09:06:23.814105 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.814222 kubelet[3236]: E0702 09:06:23.814123 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.814466 kubelet[3236]: E0702 09:06:23.814445 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.814466 kubelet[3236]: W0702 09:06:23.814463 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.814585 kubelet[3236]: E0702 09:06:23.814483 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.814725 kubelet[3236]: E0702 09:06:23.814705 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.814725 kubelet[3236]: W0702 09:06:23.814721 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.814804 kubelet[3236]: E0702 09:06:23.814739 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.817424 kubelet[3236]: E0702 09:06:23.816976 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.817424 kubelet[3236]: W0702 09:06:23.817002 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.817424 kubelet[3236]: E0702 09:06:23.817066 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.817886 kubelet[3236]: E0702 09:06:23.817854 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.817886 kubelet[3236]: W0702 09:06:23.817883 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.817989 kubelet[3236]: E0702 09:06:23.817939 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.819369 kubelet[3236]: E0702 09:06:23.818285 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.819369 kubelet[3236]: W0702 09:06:23.818302 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.819369 kubelet[3236]: E0702 09:06:23.818432 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.819814 kubelet[3236]: E0702 09:06:23.819689 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.819814 kubelet[3236]: W0702 09:06:23.819808 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.819885 kubelet[3236]: E0702 09:06:23.819833 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.820205 kubelet[3236]: E0702 09:06:23.820083 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.820205 kubelet[3236]: W0702 09:06:23.820201 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.820394 kubelet[3236]: E0702 09:06:23.820348 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.820673 kubelet[3236]: E0702 09:06:23.820652 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.820673 kubelet[3236]: W0702 09:06:23.820669 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.820784 kubelet[3236]: E0702 09:06:23.820764 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.821186 kubelet[3236]: E0702 09:06:23.821166 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.821186 kubelet[3236]: W0702 09:06:23.821184 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.821382 kubelet[3236]: E0702 09:06:23.821349 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.821652 kubelet[3236]: E0702 09:06:23.821630 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.821652 kubelet[3236]: W0702 09:06:23.821648 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.821718 kubelet[3236]: E0702 09:06:23.821665 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.822026 kubelet[3236]: E0702 09:06:23.822005 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.822026 kubelet[3236]: W0702 09:06:23.822022 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.822293 kubelet[3236]: E0702 09:06:23.822210 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.822420 kubelet[3236]: E0702 09:06:23.822403 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.822420 kubelet[3236]: W0702 09:06:23.822418 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.822603 kubelet[3236]: E0702 09:06:23.822584 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.822910 kubelet[3236]: E0702 09:06:23.822890 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.822910 kubelet[3236]: W0702 09:06:23.822906 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.823041 kubelet[3236]: E0702 09:06:23.823022 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.823319 kubelet[3236]: E0702 09:06:23.823298 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.823319 kubelet[3236]: W0702 09:06:23.823312 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.823474 kubelet[3236]: E0702 09:06:23.823443 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.824498 kubelet[3236]: E0702 09:06:23.824464 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.824498 kubelet[3236]: W0702 09:06:23.824492 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.824589 kubelet[3236]: E0702 09:06:23.824521 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.824984 kubelet[3236]: E0702 09:06:23.824957 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.824984 kubelet[3236]: W0702 09:06:23.824974 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.825069 kubelet[3236]: E0702 09:06:23.825029 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.825330 kubelet[3236]: E0702 09:06:23.825206 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.825330 kubelet[3236]: W0702 09:06:23.825220 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.825330 kubelet[3236]: E0702 09:06:23.825309 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.825486 kubelet[3236]: E0702 09:06:23.825426 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.825486 kubelet[3236]: W0702 09:06:23.825434 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.825664 kubelet[3236]: E0702 09:06:23.825565 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.825664 kubelet[3236]: E0702 09:06:23.825632 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.825664 kubelet[3236]: W0702 09:06:23.825638 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.825664 kubelet[3236]: E0702 09:06:23.825653 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.828393 kubelet[3236]: E0702 09:06:23.828327 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.829107 kubelet[3236]: W0702 09:06:23.828350 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.829107 kubelet[3236]: E0702 09:06:23.828971 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.829458 kubelet[3236]: E0702 09:06:23.829444 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.829721 kubelet[3236]: W0702 09:06:23.829697 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.829796 kubelet[3236]: E0702 09:06:23.829786 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.833494 kubelet[3236]: E0702 09:06:23.833187 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.833494 kubelet[3236]: W0702 09:06:23.833209 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.833494 kubelet[3236]: E0702 09:06:23.833242 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.833797 kubelet[3236]: E0702 09:06:23.833774 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.833797 kubelet[3236]: W0702 09:06:23.833792 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.834002 kubelet[3236]: E0702 09:06:23.833808 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:23.923651 kubelet[3236]: E0702 09:06:23.923610 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.923651 kubelet[3236]: W0702 09:06:23.923639 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.924026 kubelet[3236]: E0702 09:06:23.923666 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:23.924026 kubelet[3236]: E0702 09:06:23.923889 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:23.924026 kubelet[3236]: W0702 09:06:23.923902 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:23.924026 kubelet[3236]: E0702 09:06:23.923914 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.025979 kubelet[3236]: E0702 09:06:24.025941 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.025979 kubelet[3236]: W0702 09:06:24.025968 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.026173 kubelet[3236]: E0702 09:06:24.025993 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.026173 kubelet[3236]: E0702 09:06:24.026163 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.026173 kubelet[3236]: W0702 09:06:24.026170 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.026237 kubelet[3236]: E0702 09:06:24.026181 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.128019 kubelet[3236]: E0702 09:06:24.127974 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.128019 kubelet[3236]: W0702 09:06:24.128004 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.128019 kubelet[3236]: E0702 09:06:24.128029 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.128222 kubelet[3236]: E0702 09:06:24.128209 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.128222 kubelet[3236]: W0702 09:06:24.128217 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.128267 kubelet[3236]: E0702 09:06:24.128228 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.228896 kubelet[3236]: E0702 09:06:24.228745 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.228896 kubelet[3236]: W0702 09:06:24.228771 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.228896 kubelet[3236]: E0702 09:06:24.228792 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.230658 kubelet[3236]: E0702 09:06:24.230597 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.230658 kubelet[3236]: W0702 09:06:24.230643 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.230658 kubelet[3236]: E0702 09:06:24.230666 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.332268 kubelet[3236]: E0702 09:06:24.332023 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.332268 kubelet[3236]: W0702 09:06:24.332054 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.332268 kubelet[3236]: E0702 09:06:24.332076 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.333110 kubelet[3236]: E0702 09:06:24.332950 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.333110 kubelet[3236]: W0702 09:06:24.332972 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.333110 kubelet[3236]: E0702 09:06:24.333033 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.435282 kubelet[3236]: E0702 09:06:24.434949 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.435282 kubelet[3236]: W0702 09:06:24.434977 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.435282 kubelet[3236]: E0702 09:06:24.435003 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.435727 kubelet[3236]: E0702 09:06:24.435318 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.435727 kubelet[3236]: W0702 09:06:24.435329 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.435727 kubelet[3236]: E0702 09:06:24.435345 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.518660 kubelet[3236]: E0702 09:06:24.518614 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.518660 kubelet[3236]: W0702 09:06:24.518644 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.518660 kubelet[3236]: E0702 09:06:24.518669 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.536830 kubelet[3236]: E0702 09:06:24.536793 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.536830 kubelet[3236]: W0702 09:06:24.536819 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.536830 kubelet[3236]: E0702 09:06:24.536843 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.610239 kubelet[3236]: E0702 09:06:24.609818 3236 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 09:06:24.610239 kubelet[3236]: E0702 09:06:24.609922 3236 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8aee6a3-9e52-4dc3-a039-04bcb7faed80-typha-certs podName:e8aee6a3-9e52-4dc3-a039-04bcb7faed80 nodeName:}" failed. No retries permitted until 2024-07-02 09:06:25.109902135 +0000 UTC m=+22.825488238 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/e8aee6a3-9e52-4dc3-a039-04bcb7faed80-typha-certs") pod "calico-typha-58c87ff99b-z4h6x" (UID: "e8aee6a3-9e52-4dc3-a039-04bcb7faed80") : failed to sync secret cache: timed out waiting for the condition Jul 2 09:06:24.637404 kubelet[3236]: E0702 09:06:24.637350 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.637404 kubelet[3236]: W0702 09:06:24.637392 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.637404 kubelet[3236]: E0702 09:06:24.637413 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.738516 kubelet[3236]: E0702 09:06:24.738336 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.738516 kubelet[3236]: W0702 09:06:24.738377 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.738516 kubelet[3236]: E0702 09:06:24.738404 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:24.740415 containerd[1697]: time="2024-07-02T09:06:24.740350319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9nqkn,Uid:febc3671-e0de-4af4-a5e3-9fe9739f3d7a,Namespace:calico-system,Attempt:0,}" Jul 2 09:06:24.781205 containerd[1697]: time="2024-07-02T09:06:24.781051493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:24.781205 containerd[1697]: time="2024-07-02T09:06:24.781135974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:24.781205 containerd[1697]: time="2024-07-02T09:06:24.781155614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:24.781205 containerd[1697]: time="2024-07-02T09:06:24.781170694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:24.808581 systemd[1]: Started cri-containerd-0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667.scope - libcontainer container 0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667. 
Jul 2 09:06:24.831271 containerd[1697]: time="2024-07-02T09:06:24.831188730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9nqkn,Uid:febc3671-e0de-4af4-a5e3-9fe9739f3d7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\"" Jul 2 09:06:24.834387 containerd[1697]: time="2024-07-02T09:06:24.834097217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 09:06:24.839680 kubelet[3236]: E0702 09:06:24.839655 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.839883 kubelet[3236]: W0702 09:06:24.839816 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.839883 kubelet[3236]: E0702 09:06:24.839846 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:24.941475 kubelet[3236]: E0702 09:06:24.940770 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:24.941475 kubelet[3236]: W0702 09:06:24.941365 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:24.941783 kubelet[3236]: E0702 09:06:24.941399 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:25.043115 kubelet[3236]: E0702 09:06:25.043073 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.043115 kubelet[3236]: W0702 09:06:25.043100 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.043115 kubelet[3236]: E0702 09:06:25.043124 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:25.144641 kubelet[3236]: E0702 09:06:25.144523 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.145252 kubelet[3236]: W0702 09:06:25.144874 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.145252 kubelet[3236]: E0702 09:06:25.144904 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:25.145487 kubelet[3236]: E0702 09:06:25.145342 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.145487 kubelet[3236]: W0702 09:06:25.145376 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.145487 kubelet[3236]: E0702 09:06:25.145394 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:25.145923 kubelet[3236]: E0702 09:06:25.145582 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.145923 kubelet[3236]: W0702 09:06:25.145591 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.145923 kubelet[3236]: E0702 09:06:25.145602 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:25.146453 kubelet[3236]: E0702 09:06:25.145953 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.146453 kubelet[3236]: W0702 09:06:25.145964 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.146453 kubelet[3236]: E0702 09:06:25.145978 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:25.146453 kubelet[3236]: E0702 09:06:25.146205 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.146453 kubelet[3236]: W0702 09:06:25.146213 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.146453 kubelet[3236]: E0702 09:06:25.146223 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:06:25.153899 kubelet[3236]: E0702 09:06:25.153870 3236 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:06:25.154136 kubelet[3236]: W0702 09:06:25.154035 3236 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:06:25.154136 kubelet[3236]: E0702 09:06:25.154066 3236 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:06:25.227899 containerd[1697]: time="2024-07-02T09:06:25.227643693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c87ff99b-z4h6x,Uid:e8aee6a3-9e52-4dc3-a039-04bcb7faed80,Namespace:calico-system,Attempt:0,}" Jul 2 09:06:25.267877 containerd[1697]: time="2024-07-02T09:06:25.267709307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:25.267877 containerd[1697]: time="2024-07-02T09:06:25.267783027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:25.267877 containerd[1697]: time="2024-07-02T09:06:25.267803307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:25.267877 containerd[1697]: time="2024-07-02T09:06:25.267830987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:25.287634 systemd[1]: Started cri-containerd-988551e27865ebaf1ccf38bb16b9c2ead16d41c68eb2cdc898dfa65c29f6385a.scope - libcontainer container 988551e27865ebaf1ccf38bb16b9c2ead16d41c68eb2cdc898dfa65c29f6385a. Jul 2 09:06:25.329548 containerd[1697]: time="2024-07-02T09:06:25.329461930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c87ff99b-z4h6x,Uid:e8aee6a3-9e52-4dc3-a039-04bcb7faed80,Namespace:calico-system,Attempt:0,} returns sandbox id \"988551e27865ebaf1ccf38bb16b9c2ead16d41c68eb2cdc898dfa65c29f6385a\"" Jul 2 09:06:25.381798 kubelet[3236]: E0702 09:06:25.381733 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:25.940724 containerd[1697]: time="2024-07-02T09:06:25.940540313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:25.944047 containerd[1697]: time="2024-07-02T09:06:25.943685041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 09:06:25.949976 containerd[1697]: time="2024-07-02T09:06:25.949745575Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:25.955961 containerd[1697]: time="2024-07-02T09:06:25.955878029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:25.957049 
containerd[1697]: time="2024-07-02T09:06:25.956908471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.122762614s" Jul 2 09:06:25.957049 containerd[1697]: time="2024-07-02T09:06:25.956958792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 09:06:25.958970 containerd[1697]: time="2024-07-02T09:06:25.958192954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 09:06:25.961581 containerd[1697]: time="2024-07-02T09:06:25.961512602Z" level=info msg="CreateContainer within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 09:06:25.998408 containerd[1697]: time="2024-07-02T09:06:25.998329968Z" level=info msg="CreateContainer within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac\"" Jul 2 09:06:26.000465 containerd[1697]: time="2024-07-02T09:06:25.998998129Z" level=info msg="StartContainer for \"83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac\"" Jul 2 09:06:26.032570 systemd[1]: Started cri-containerd-83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac.scope - libcontainer container 83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac. 
Jul 2 09:06:26.068461 containerd[1697]: time="2024-07-02T09:06:26.068129890Z" level=info msg="StartContainer for \"83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac\" returns successfully" Jul 2 09:06:26.085789 systemd[1]: cri-containerd-83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac.scope: Deactivated successfully. Jul 2 09:06:26.118614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac-rootfs.mount: Deactivated successfully. Jul 2 09:06:26.647636 containerd[1697]: time="2024-07-02T09:06:26.647516959Z" level=info msg="shim disconnected" id=83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac namespace=k8s.io Jul 2 09:06:26.647636 containerd[1697]: time="2024-07-02T09:06:26.647595520Z" level=warning msg="cleaning up after shim disconnected" id=83867c9861ebfae27db808874fe33ee50657ae51af1dd67705b6974ff90363ac namespace=k8s.io Jul 2 09:06:26.647636 containerd[1697]: time="2024-07-02T09:06:26.647604440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:06:27.381878 kubelet[3236]: E0702 09:06:27.381833 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:28.092574 containerd[1697]: time="2024-07-02T09:06:28.092483724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:28.095460 containerd[1697]: time="2024-07-02T09:06:28.095393811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 09:06:28.100030 containerd[1697]: time="2024-07-02T09:06:28.099954941Z" level=info msg="ImageCreate event 
name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:28.104969 containerd[1697]: time="2024-07-02T09:06:28.104887033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:28.106216 containerd[1697]: time="2024-07-02T09:06:28.105631155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.147396561s" Jul 2 09:06:28.106216 containerd[1697]: time="2024-07-02T09:06:28.105672155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 09:06:28.107893 containerd[1697]: time="2024-07-02T09:06:28.107853720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 09:06:28.127742 containerd[1697]: time="2024-07-02T09:06:28.127648446Z" level=info msg="CreateContainer within sandbox \"988551e27865ebaf1ccf38bb16b9c2ead16d41c68eb2cdc898dfa65c29f6385a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 09:06:28.160948 containerd[1697]: time="2024-07-02T09:06:28.160889043Z" level=info msg="CreateContainer within sandbox \"988551e27865ebaf1ccf38bb16b9c2ead16d41c68eb2cdc898dfa65c29f6385a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7fd34891ef38307249dc933f27d1ec47409d1fb6f07e8e6b73aa886d17049c43\"" Jul 2 09:06:28.161948 containerd[1697]: time="2024-07-02T09:06:28.161911766Z" level=info msg="StartContainer for 
\"7fd34891ef38307249dc933f27d1ec47409d1fb6f07e8e6b73aa886d17049c43\"" Jul 2 09:06:28.193625 systemd[1]: Started cri-containerd-7fd34891ef38307249dc933f27d1ec47409d1fb6f07e8e6b73aa886d17049c43.scope - libcontainer container 7fd34891ef38307249dc933f27d1ec47409d1fb6f07e8e6b73aa886d17049c43. Jul 2 09:06:28.237907 containerd[1697]: time="2024-07-02T09:06:28.237707702Z" level=info msg="StartContainer for \"7fd34891ef38307249dc933f27d1ec47409d1fb6f07e8e6b73aa886d17049c43\" returns successfully" Jul 2 09:06:28.508067 kubelet[3236]: I0702 09:06:28.507638 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-58c87ff99b-z4h6x" podStartSLOduration=2.731790587 podStartE2EDuration="5.507590691s" podCreationTimestamp="2024-07-02 09:06:23 +0000 UTC" firstStartedPulling="2024-07-02 09:06:25.331136414 +0000 UTC m=+23.046722557" lastFinishedPulling="2024-07-02 09:06:28.106936518 +0000 UTC m=+25.822522661" observedRunningTime="2024-07-02 09:06:28.506714289 +0000 UTC m=+26.222300432" watchObservedRunningTime="2024-07-02 09:06:28.507590691 +0000 UTC m=+26.223176794" Jul 2 09:06:29.381511 kubelet[3236]: E0702 09:06:29.381454 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:29.499458 kubelet[3236]: I0702 09:06:29.499410 3236 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:06:30.827091 containerd[1697]: time="2024-07-02T09:06:30.826444850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:30.829205 containerd[1697]: time="2024-07-02T09:06:30.829008136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: 
active requests=0, bytes read=86799715" Jul 2 09:06:30.831283 containerd[1697]: time="2024-07-02T09:06:30.831221781Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:30.835015 containerd[1697]: time="2024-07-02T09:06:30.834927870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:30.835871 containerd[1697]: time="2024-07-02T09:06:30.835718752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.727819072s" Jul 2 09:06:30.835871 containerd[1697]: time="2024-07-02T09:06:30.835759232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 09:06:30.839697 containerd[1697]: time="2024-07-02T09:06:30.839278480Z" level=info msg="CreateContainer within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 09:06:30.875393 containerd[1697]: time="2024-07-02T09:06:30.875308564Z" level=info msg="CreateContainer within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f\"" Jul 2 09:06:30.877206 containerd[1697]: time="2024-07-02T09:06:30.877145968Z" level=info msg="StartContainer for 
\"3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f\"" Jul 2 09:06:30.912597 systemd[1]: Started cri-containerd-3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f.scope - libcontainer container 3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f. Jul 2 09:06:30.942657 containerd[1697]: time="2024-07-02T09:06:30.942582480Z" level=info msg="StartContainer for \"3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f\" returns successfully" Jul 2 09:06:31.381156 kubelet[3236]: E0702 09:06:31.381106 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:31.881998 systemd[1]: cri-containerd-3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f.scope: Deactivated successfully. Jul 2 09:06:31.909927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f-rootfs.mount: Deactivated successfully. 
Jul 2 09:06:31.960866 kubelet[3236]: I0702 09:06:31.960470 3236 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:31.989920 3236 topology_manager.go:215] "Topology Admit Handler" podUID="505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a" podNamespace="kube-system" podName="coredns-76f75df574-j2fst" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:32.004958 3236 topology_manager.go:215] "Topology Admit Handler" podUID="667beeb3-e864-4cf6-8526-e51c0263f76a" podNamespace="kube-system" podName="coredns-76f75df574-rpz5d" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:32.008169 3236 topology_manager.go:215] "Topology Admit Handler" podUID="105f3c79-8c9f-4f4b-b6cd-24afabef9d5e" podNamespace="calico-system" podName="calico-kube-controllers-75647d58f7-5rvzc" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:32.094205 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/105f3c79-8c9f-4f4b-b6cd-24afabef9d5e-tigera-ca-bundle\") pod \"calico-kube-controllers-75647d58f7-5rvzc\" (UID: \"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e\") " pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:32.094512 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a-config-volume\") pod \"coredns-76f75df574-j2fst\" (UID: \"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a\") " pod="kube-system/coredns-76f75df574-j2fst" Jul 2 09:06:32.260628 kubelet[3236]: I0702 09:06:32.094650 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/667beeb3-e864-4cf6-8526-e51c0263f76a-config-volume\") pod \"coredns-76f75df574-rpz5d\" (UID: 
\"667beeb3-e864-4cf6-8526-e51c0263f76a\") " pod="kube-system/coredns-76f75df574-rpz5d" Jul 2 09:06:31.999516 systemd[1]: Created slice kubepods-burstable-pod505b2c6e_5cdd_4d6a_911b_bb1a7ce2e04a.slice - libcontainer container kubepods-burstable-pod505b2c6e_5cdd_4d6a_911b_bb1a7ce2e04a.slice. Jul 2 09:06:32.261081 kubelet[3236]: I0702 09:06:32.094726 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dp4s\" (UniqueName: \"kubernetes.io/projected/505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a-kube-api-access-2dp4s\") pod \"coredns-76f75df574-j2fst\" (UID: \"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a\") " pod="kube-system/coredns-76f75df574-j2fst" Jul 2 09:06:32.261081 kubelet[3236]: I0702 09:06:32.094759 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7plb6\" (UniqueName: \"kubernetes.io/projected/105f3c79-8c9f-4f4b-b6cd-24afabef9d5e-kube-api-access-7plb6\") pod \"calico-kube-controllers-75647d58f7-5rvzc\" (UID: \"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e\") " pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" Jul 2 09:06:32.261081 kubelet[3236]: I0702 09:06:32.094895 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkchf\" (UniqueName: \"kubernetes.io/projected/667beeb3-e864-4cf6-8526-e51c0263f76a-kube-api-access-mkchf\") pod \"coredns-76f75df574-rpz5d\" (UID: \"667beeb3-e864-4cf6-8526-e51c0263f76a\") " pod="kube-system/coredns-76f75df574-rpz5d" Jul 2 09:06:32.018068 systemd[1]: Created slice kubepods-burstable-pod667beeb3_e864_4cf6_8526_e51c0263f76a.slice - libcontainer container kubepods-burstable-pod667beeb3_e864_4cf6_8526_e51c0263f76a.slice. Jul 2 09:06:32.026717 systemd[1]: Created slice kubepods-besteffort-pod105f3c79_8c9f_4f4b_b6cd_24afabef9d5e.slice - libcontainer container kubepods-besteffort-pod105f3c79_8c9f_4f4b_b6cd_24afabef9d5e.slice. 
Jul 2 09:06:32.560553 containerd[1697]: time="2024-07-02T09:06:32.560428842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j2fst,Uid:505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a,Namespace:kube-system,Attempt:0,}" Jul 2 09:06:32.570392 containerd[1697]: time="2024-07-02T09:06:32.570253385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75647d58f7-5rvzc,Uid:105f3c79-8c9f-4f4b-b6cd-24afabef9d5e,Namespace:calico-system,Attempt:0,}" Jul 2 09:06:32.570695 containerd[1697]: time="2024-07-02T09:06:32.570653506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpz5d,Uid:667beeb3-e864-4cf6-8526-e51c0263f76a,Namespace:kube-system,Attempt:0,}" Jul 2 09:06:32.895220 kubelet[3236]: I0702 09:06:32.894775 3236 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:06:33.391898 systemd[1]: Created slice kubepods-besteffort-podc7c64a89_d23b_4bef_9c27_bbb0ad23595e.slice - libcontainer container kubepods-besteffort-podc7c64a89_d23b_4bef_9c27_bbb0ad23595e.slice. 
Jul 2 09:06:33.394933 containerd[1697]: time="2024-07-02T09:06:33.394882263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zctz4,Uid:c7c64a89-d23b-4bef-9c27-bbb0ad23595e,Namespace:calico-system,Attempt:0,}" Jul 2 09:06:33.483031 containerd[1697]: time="2024-07-02T09:06:33.482935344Z" level=info msg="shim disconnected" id=3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f namespace=k8s.io Jul 2 09:06:33.483031 containerd[1697]: time="2024-07-02T09:06:33.482990304Z" level=warning msg="cleaning up after shim disconnected" id=3bfb81921a8b8a5baefc0c056fd749b8fbe1c922daa20e73452afea86346407f namespace=k8s.io Jul 2 09:06:33.483031 containerd[1697]: time="2024-07-02T09:06:33.482999024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:06:33.512130 containerd[1697]: time="2024-07-02T09:06:33.512065650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 09:06:33.681999 containerd[1697]: time="2024-07-02T09:06:33.681742156Z" level=error msg="Failed to destroy network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.685015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b-shm.mount: Deactivated successfully. 
Jul 2 09:06:33.685650 containerd[1697]: time="2024-07-02T09:06:33.685493445Z" level=error msg="encountered an error cleaning up failed sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.687571 containerd[1697]: time="2024-07-02T09:06:33.687486289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpz5d,Uid:667beeb3-e864-4cf6-8526-e51c0263f76a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.688449 kubelet[3236]: E0702 09:06:33.688031 3236 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.688449 kubelet[3236]: E0702 09:06:33.688091 3236 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rpz5d" Jul 2 09:06:33.688449 kubelet[3236]: E0702 09:06:33.688110 3236 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rpz5d" Jul 2 09:06:33.688666 kubelet[3236]: E0702 09:06:33.688164 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rpz5d_kube-system(667beeb3-e864-4cf6-8526-e51c0263f76a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rpz5d_kube-system(667beeb3-e864-4cf6-8526-e51c0263f76a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rpz5d" podUID="667beeb3-e864-4cf6-8526-e51c0263f76a" Jul 2 09:06:33.701510 containerd[1697]: time="2024-07-02T09:06:33.701450281Z" level=error msg="Failed to destroy network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.701691 containerd[1697]: time="2024-07-02T09:06:33.701643602Z" level=error msg="Failed to destroy network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 09:06:33.702321 containerd[1697]: time="2024-07-02T09:06:33.702266363Z" level=error msg="encountered an error cleaning up failed sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.704175 containerd[1697]: time="2024-07-02T09:06:33.702351123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75647d58f7-5rvzc,Uid:105f3c79-8c9f-4f4b-b6cd-24afabef9d5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.704570 containerd[1697]: time="2024-07-02T09:06:33.704097647Z" level=error msg="Failed to destroy network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.704583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05-shm.mount: Deactivated successfully. Jul 2 09:06:33.704738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244-shm.mount: Deactivated successfully. 
Jul 2 09:06:33.705380 kubelet[3236]: E0702 09:06:33.704880 3236 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.705380 kubelet[3236]: E0702 09:06:33.704932 3236 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" Jul 2 09:06:33.705380 kubelet[3236]: E0702 09:06:33.704953 3236 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" Jul 2 09:06:33.705544 kubelet[3236]: E0702 09:06:33.705011 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75647d58f7-5rvzc_calico-system(105f3c79-8c9f-4f4b-b6cd-24afabef9d5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75647d58f7-5rvzc_calico-system(105f3c79-8c9f-4f4b-b6cd-24afabef9d5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" podUID="105f3c79-8c9f-4f4b-b6cd-24afabef9d5e" Jul 2 09:06:33.707086 containerd[1697]: time="2024-07-02T09:06:33.704397448Z" level=error msg="encountered an error cleaning up failed sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.707086 containerd[1697]: time="2024-07-02T09:06:33.706795653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zctz4,Uid:c7c64a89-d23b-4bef-9c27-bbb0ad23595e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.707232 kubelet[3236]: E0702 09:06:33.707024 3236 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.707232 kubelet[3236]: E0702 09:06:33.707128 3236 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:33.707232 kubelet[3236]: E0702 09:06:33.707154 3236 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zctz4" Jul 2 09:06:33.707324 kubelet[3236]: E0702 09:06:33.707230 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zctz4_calico-system(c7c64a89-d23b-4bef-9c27-bbb0ad23595e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zctz4_calico-system(c7c64a89-d23b-4bef-9c27-bbb0ad23595e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:33.708700 containerd[1697]: time="2024-07-02T09:06:33.708474337Z" level=error msg="encountered an error cleaning up failed sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 09:06:33.708700 containerd[1697]: time="2024-07-02T09:06:33.708639658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j2fst,Uid:505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.709155 kubelet[3236]: E0702 09:06:33.708989 3236 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:33.709155 kubelet[3236]: E0702 09:06:33.709048 3236 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j2fst" Jul 2 09:06:33.709155 kubelet[3236]: E0702 09:06:33.709069 3236 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j2fst" Jul 2 
09:06:33.709273 kubelet[3236]: E0702 09:06:33.709121 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j2fst_kube-system(505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j2fst_kube-system(505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j2fst" podUID="505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a" Jul 2 09:06:34.519376 kubelet[3236]: I0702 09:06:34.517826 3236 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:34.524528 containerd[1697]: time="2024-07-02T09:06:34.524468036Z" level=info msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" Jul 2 09:06:34.525053 containerd[1697]: time="2024-07-02T09:06:34.525004877Z" level=info msg="Ensure that sandbox 12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244 in task-service has been cleanup successfully" Jul 2 09:06:34.545697 kubelet[3236]: I0702 09:06:34.545637 3236 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:34.550484 containerd[1697]: time="2024-07-02T09:06:34.550426415Z" level=info msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" Jul 2 09:06:34.550691 containerd[1697]: time="2024-07-02T09:06:34.550662536Z" level=info msg="Ensure that sandbox 54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b in 
task-service has been cleanup successfully" Jul 2 09:06:34.560939 kubelet[3236]: I0702 09:06:34.560901 3236 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:34.563141 containerd[1697]: time="2024-07-02T09:06:34.563097924Z" level=info msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" Jul 2 09:06:34.563879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1-shm.mount: Deactivated successfully. Jul 2 09:06:34.565968 containerd[1697]: time="2024-07-02T09:06:34.565923090Z" level=info msg="Ensure that sandbox 249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05 in task-service has been cleanup successfully" Jul 2 09:06:34.570867 kubelet[3236]: I0702 09:06:34.570714 3236 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:34.572386 containerd[1697]: time="2024-07-02T09:06:34.572193065Z" level=info msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" Jul 2 09:06:34.572517 containerd[1697]: time="2024-07-02T09:06:34.572436265Z" level=info msg="Ensure that sandbox c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1 in task-service has been cleanup successfully" Jul 2 09:06:34.628751 containerd[1697]: time="2024-07-02T09:06:34.628655273Z" level=error msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" failed" error="failed to destroy network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:34.629154 kubelet[3236]: 
E0702 09:06:34.629066 3236 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:34.629154 kubelet[3236]: E0702 09:06:34.629151 3236 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244"} Jul 2 09:06:34.629243 kubelet[3236]: E0702 09:06:34.629187 3236 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:06:34.629243 kubelet[3236]: E0702 09:06:34.629216 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7c64a89-d23b-4bef-9c27-bbb0ad23595e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zctz4" podUID="c7c64a89-d23b-4bef-9c27-bbb0ad23595e" Jul 2 09:06:34.645513 containerd[1697]: 
time="2024-07-02T09:06:34.645311111Z" level=error msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" failed" error="failed to destroy network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:34.645989 kubelet[3236]: E0702 09:06:34.645825 3236 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:34.645989 kubelet[3236]: E0702 09:06:34.645941 3236 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b"} Jul 2 09:06:34.645989 kubelet[3236]: E0702 09:06:34.645985 3236 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"667beeb3-e864-4cf6-8526-e51c0263f76a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:06:34.646159 kubelet[3236]: E0702 09:06:34.646018 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"667beeb3-e864-4cf6-8526-e51c0263f76a\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rpz5d" podUID="667beeb3-e864-4cf6-8526-e51c0263f76a" Jul 2 09:06:34.650975 containerd[1697]: time="2024-07-02T09:06:34.650828484Z" level=error msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" failed" error="failed to destroy network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:34.651121 kubelet[3236]: E0702 09:06:34.651083 3236 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:34.651163 kubelet[3236]: E0702 09:06:34.651126 3236 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05"} Jul 2 09:06:34.651195 kubelet[3236]: E0702 09:06:34.651176 3236 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:06:34.651253 kubelet[3236]: E0702 09:06:34.651204 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" podUID="105f3c79-8c9f-4f4b-b6cd-24afabef9d5e" Jul 2 09:06:34.655627 containerd[1697]: time="2024-07-02T09:06:34.655009133Z" level=error msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" failed" error="failed to destroy network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:06:34.655759 kubelet[3236]: E0702 09:06:34.655332 3236 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:34.655759 kubelet[3236]: E0702 
09:06:34.655441 3236 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1"} Jul 2 09:06:34.655759 kubelet[3236]: E0702 09:06:34.655479 3236 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:06:34.655759 kubelet[3236]: E0702 09:06:34.655508 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j2fst" podUID="505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a" Jul 2 09:06:37.184986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393431776.mount: Deactivated successfully. 
Jul 2 09:06:37.576247 containerd[1697]: time="2024-07-02T09:06:37.576192747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:37.577999 containerd[1697]: time="2024-07-02T09:06:37.577842951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 09:06:37.581018 containerd[1697]: time="2024-07-02T09:06:37.580946558Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:37.584973 containerd[1697]: time="2024-07-02T09:06:37.584892927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:37.585975 containerd[1697]: time="2024-07-02T09:06:37.585475168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 4.073361398s" Jul 2 09:06:37.585975 containerd[1697]: time="2024-07-02T09:06:37.585517168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 09:06:37.602811 containerd[1697]: time="2024-07-02T09:06:37.602761728Z" level=info msg="CreateContainer within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 09:06:37.646416 containerd[1697]: time="2024-07-02T09:06:37.646338307Z" level=info msg="CreateContainer 
within sandbox \"0863f17c0a34f90e3f450e0f2a2e36353f88fa44d675381cd408fc5def6c1667\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946\"" Jul 2 09:06:37.648741 containerd[1697]: time="2024-07-02T09:06:37.647163429Z" level=info msg="StartContainer for \"bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946\"" Jul 2 09:06:37.677578 systemd[1]: Started cri-containerd-bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946.scope - libcontainer container bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946. Jul 2 09:06:37.712504 containerd[1697]: time="2024-07-02T09:06:37.712447177Z" level=info msg="StartContainer for \"bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946\" returns successfully" Jul 2 09:06:37.886380 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 09:06:37.886771 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 09:06:38.593913 kubelet[3236]: I0702 09:06:38.593877 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9nqkn" podStartSLOduration=2.840625029 podStartE2EDuration="15.593816065s" podCreationTimestamp="2024-07-02 09:06:23 +0000 UTC" firstStartedPulling="2024-07-02 09:06:24.833017534 +0000 UTC m=+22.548603677" lastFinishedPulling="2024-07-02 09:06:37.58620857 +0000 UTC m=+35.301794713" observedRunningTime="2024-07-02 09:06:38.591959861 +0000 UTC m=+36.307545964" watchObservedRunningTime="2024-07-02 09:06:38.593816065 +0000 UTC m=+36.309402168" Jul 2 09:06:39.623736 systemd-networkd[1324]: vxlan.calico: Link UP Jul 2 09:06:39.623744 systemd-networkd[1324]: vxlan.calico: Gained carrier Jul 2 09:06:40.153606 kubelet[3236]: I0702 09:06:40.153556 3236 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:06:41.296571 systemd-networkd[1324]: vxlan.calico: Gained IPv6LL Jul 2 09:06:45.381968 containerd[1697]: time="2024-07-02T09:06:45.381652914Z" level=info msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] k8s.go 608: Cleaning up netns ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" iface="eth0" netns="/var/run/netns/cni-869a50fc-1f19-7f58-3afb-ad21d1459052" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" iface="eth0" netns="/var/run/netns/cni-869a50fc-1f19-7f58-3afb-ad21d1459052" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" iface="eth0" netns="/var/run/netns/cni-869a50fc-1f19-7f58-3afb-ad21d1459052" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] k8s.go 615: Releasing IP address(es) ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.435 [INFO][4471] utils.go 188: Calico CNI releasing IP address ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.456 [INFO][4477] ipam_plugin.go 411: Releasing address using handleID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.456 [INFO][4477] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.456 [INFO][4477] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.465 [WARNING][4477] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.465 [INFO][4477] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.467 [INFO][4477] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:45.471620 containerd[1697]: 2024-07-02 09:06:45.469 [INFO][4471] k8s.go 621: Teardown processing complete. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:06:45.471620 containerd[1697]: time="2024-07-02T09:06:45.471297488Z" level=info msg="TearDown network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" successfully" Jul 2 09:06:45.471620 containerd[1697]: time="2024-07-02T09:06:45.471327288Z" level=info msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" returns successfully" Jul 2 09:06:45.473657 systemd[1]: run-netns-cni\x2d869a50fc\x2d1f19\x2d7f58\x2d3afb\x2dad21d1459052.mount: Deactivated successfully. 
Jul 2 09:06:45.474963 containerd[1697]: time="2024-07-02T09:06:45.474924976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j2fst,Uid:505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a,Namespace:kube-system,Attempt:1,}" Jul 2 09:06:45.611304 systemd-networkd[1324]: cali8a68caff155: Link UP Jul 2 09:06:45.611524 systemd-networkd[1324]: cali8a68caff155: Gained carrier Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.544 [INFO][4484] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0 coredns-76f75df574- kube-system 505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a 684 0 2024-07-02 09:06:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce coredns-76f75df574-j2fst eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a68caff155 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.544 [INFO][4484] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.572 [INFO][4495] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" HandleID="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" 
Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.582 [INFO][4495] ipam_plugin.go 264: Auto assigning IP ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" HandleID="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059de30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"coredns-76f75df574-j2fst", "timestamp":"2024-07-02 09:06:45.572182448 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.582 [INFO][4495] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.582 [INFO][4495] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.582 [INFO][4495] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.584 [INFO][4495] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.587 [INFO][4495] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.591 [INFO][4495] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.593 [INFO][4495] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.595 [INFO][4495] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.595 [INFO][4495] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.596 [INFO][4495] ipam.go 1685: Creating new handle: k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.599 [INFO][4495] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.604 [INFO][4495] ipam.go 1216: Successfully claimed IPs: [192.168.73.1/26] block=192.168.73.0/26 
handle="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.604 [INFO][4495] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.1/26] handle="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.604 [INFO][4495] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:45.630815 containerd[1697]: 2024-07-02 09:06:45.604 [INFO][4495] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.1/26] IPv6=[] ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" HandleID="k8s-pod-network.2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.608 [INFO][4484] k8s.go 386: Populated endpoint ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"coredns-76f75df574-j2fst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a68caff155", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.608 [INFO][4484] k8s.go 387: Calico CNI using IPs: [192.168.73.1/32] ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.608 [INFO][4484] dataplane_linux.go 68: Setting the host side veth name to cali8a68caff155 ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.612 [INFO][4484] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" 
WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.612 [INFO][4484] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed", Pod:"coredns-76f75df574-j2fst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a68caff155", MAC:"26:d6:f6:1f:e7:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:45.632787 containerd[1697]: 2024-07-02 09:06:45.624 [INFO][4484] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed" Namespace="kube-system" Pod="coredns-76f75df574-j2fst" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:06:45.658237 containerd[1697]: time="2024-07-02T09:06:45.658050812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:45.658237 containerd[1697]: time="2024-07-02T09:06:45.658116772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:45.658237 containerd[1697]: time="2024-07-02T09:06:45.658136332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:45.658237 containerd[1697]: time="2024-07-02T09:06:45.658149372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:45.683564 systemd[1]: Started cri-containerd-2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed.scope - libcontainer container 2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed. 
Jul 2 09:06:45.719331 containerd[1697]: time="2024-07-02T09:06:45.719284998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j2fst,Uid:505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a,Namespace:kube-system,Attempt:1,} returns sandbox id \"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed\"" Jul 2 09:06:45.723981 containerd[1697]: time="2024-07-02T09:06:45.723324208Z" level=info msg="CreateContainer within sandbox \"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:06:45.753007 containerd[1697]: time="2024-07-02T09:06:45.752953758Z" level=info msg="CreateContainer within sandbox \"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b69b9dd1b18c2323ee39cadd4c5d91a4f5ae4516b5ebfbe4d009c2611f6014cd\"" Jul 2 09:06:45.753632 containerd[1697]: time="2024-07-02T09:06:45.753587120Z" level=info msg="StartContainer for \"b69b9dd1b18c2323ee39cadd4c5d91a4f5ae4516b5ebfbe4d009c2611f6014cd\"" Jul 2 09:06:45.778551 systemd[1]: Started cri-containerd-b69b9dd1b18c2323ee39cadd4c5d91a4f5ae4516b5ebfbe4d009c2611f6014cd.scope - libcontainer container b69b9dd1b18c2323ee39cadd4c5d91a4f5ae4516b5ebfbe4d009c2611f6014cd. 
Jul 2 09:06:45.806863 containerd[1697]: time="2024-07-02T09:06:45.806672966Z" level=info msg="StartContainer for \"b69b9dd1b18c2323ee39cadd4c5d91a4f5ae4516b5ebfbe4d009c2611f6014cd\" returns successfully" Jul 2 09:06:46.383320 containerd[1697]: time="2024-07-02T09:06:46.383184899Z" level=info msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.427 [INFO][4606] k8s.go 608: Cleaning up netns ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.428 [INFO][4606] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" iface="eth0" netns="/var/run/netns/cni-34807e87-dac6-57bc-57cf-751136099972" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.429 [INFO][4606] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" iface="eth0" netns="/var/run/netns/cni-34807e87-dac6-57bc-57cf-751136099972" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.429 [INFO][4606] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" iface="eth0" netns="/var/run/netns/cni-34807e87-dac6-57bc-57cf-751136099972" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.429 [INFO][4606] k8s.go 615: Releasing IP address(es) ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.429 [INFO][4606] utils.go 188: Calico CNI releasing IP address ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.449 [INFO][4613] ipam_plugin.go 411: Releasing address using handleID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.449 [INFO][4613] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.449 [INFO][4613] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.459 [WARNING][4613] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.459 [INFO][4613] ipam_plugin.go 439: Releasing address using workloadID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.460 [INFO][4613] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:46.464394 containerd[1697]: 2024-07-02 09:06:46.462 [INFO][4606] k8s.go 621: Teardown processing complete. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:06:46.465989 containerd[1697]: time="2024-07-02T09:06:46.464518692Z" level=info msg="TearDown network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" successfully" Jul 2 09:06:46.465989 containerd[1697]: time="2024-07-02T09:06:46.464557573Z" level=info msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" returns successfully" Jul 2 09:06:46.465989 containerd[1697]: time="2024-07-02T09:06:46.465369894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75647d58f7-5rvzc,Uid:105f3c79-8c9f-4f4b-b6cd-24afabef9d5e,Namespace:calico-system,Attempt:1,}" Jul 2 09:06:46.475506 systemd[1]: run-netns-cni\x2d34807e87\x2ddac6\x2d57bc\x2d57cf\x2d751136099972.mount: Deactivated successfully. 
Jul 2 09:06:46.613625 kubelet[3236]: I0702 09:06:46.613567 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-j2fst" podStartSLOduration=29.613332887 podStartE2EDuration="29.613332887s" podCreationTimestamp="2024-07-02 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:46.612980326 +0000 UTC m=+44.328566469" watchObservedRunningTime="2024-07-02 09:06:46.613332887 +0000 UTC m=+44.328919030" Jul 2 09:06:46.620662 systemd-networkd[1324]: cali0adb2903577: Link UP Jul 2 09:06:46.622922 systemd-networkd[1324]: cali0adb2903577: Gained carrier Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.539 [INFO][4623] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0 calico-kube-controllers-75647d58f7- calico-system 105f3c79-8c9f-4f4b-b6cd-24afabef9d5e 693 0 2024-07-02 09:06:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75647d58f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce calico-kube-controllers-75647d58f7-5rvzc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0adb2903577 [] []}} ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.539 [INFO][4623] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" 
Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.567 [INFO][4630] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" HandleID="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.580 [INFO][4630] ipam_plugin.go 264: Auto assigning IP ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" HandleID="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"calico-kube-controllers-75647d58f7-5rvzc", "timestamp":"2024-07-02 09:06:46.566976536 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.580 [INFO][4630] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.580 [INFO][4630] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.580 [INFO][4630] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.582 [INFO][4630] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.586 [INFO][4630] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.590 [INFO][4630] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.592 [INFO][4630] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.597 [INFO][4630] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.597 [INFO][4630] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.600 [INFO][4630] ipam.go 1685: Creating new handle: k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996 Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.603 [INFO][4630] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.609 [INFO][4630] ipam.go 1216: Successfully claimed IPs: [192.168.73.2/26] block=192.168.73.0/26 
handle="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.609 [INFO][4630] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.2/26] handle="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.609 [INFO][4630] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:46.650826 containerd[1697]: 2024-07-02 09:06:46.610 [INFO][4630] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.2/26] IPv6=[] ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" HandleID="k8s-pod-network.b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.615 [INFO][4623] k8s.go 386: Populated endpoint ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0", GenerateName:"calico-kube-controllers-75647d58f7-", Namespace:"calico-system", SelfLink:"", UID:"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75647d58f7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"calico-kube-controllers-75647d58f7-5rvzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0adb2903577", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.615 [INFO][4623] k8s.go 387: Calico CNI using IPs: [192.168.73.2/32] ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.615 [INFO][4623] dataplane_linux.go 68: Setting the host side veth name to cali0adb2903577 ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.622 [INFO][4623] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 
09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.624 [INFO][4623] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0", GenerateName:"calico-kube-controllers-75647d58f7-", Namespace:"calico-system", SelfLink:"", UID:"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75647d58f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996", Pod:"calico-kube-controllers-75647d58f7-5rvzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0adb2903577", MAC:"7a:dd:e8:15:46:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
09:06:46.653731 containerd[1697]: 2024-07-02 09:06:46.647 [INFO][4623] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996" Namespace="calico-system" Pod="calico-kube-controllers-75647d58f7-5rvzc" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:06:46.682228 containerd[1697]: time="2024-07-02T09:06:46.682055810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:46.682228 containerd[1697]: time="2024-07-02T09:06:46.682121771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:46.682228 containerd[1697]: time="2024-07-02T09:06:46.682150691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:46.682228 containerd[1697]: time="2024-07-02T09:06:46.682164171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:46.717591 systemd[1]: Started cri-containerd-b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996.scope - libcontainer container b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996. 
Jul 2 09:06:46.750466 containerd[1697]: time="2024-07-02T09:06:46.750399253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75647d58f7-5rvzc,Uid:105f3c79-8c9f-4f4b-b6cd-24afabef9d5e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996\"" Jul 2 09:06:46.753146 containerd[1697]: time="2024-07-02T09:06:46.753103700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 09:06:47.474657 systemd[1]: run-containerd-runc-k8s.io-b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996-runc.C183gS.mount: Deactivated successfully. Jul 2 09:06:47.568890 systemd-networkd[1324]: cali8a68caff155: Gained IPv6LL Jul 2 09:06:48.080612 systemd-networkd[1324]: cali0adb2903577: Gained IPv6LL Jul 2 09:06:48.384266 containerd[1697]: time="2024-07-02T09:06:48.383766542Z" level=info msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" Jul 2 09:06:48.386536 containerd[1697]: time="2024-07-02T09:06:48.385269226Z" level=info msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" Jul 2 09:06:48.451375 containerd[1697]: time="2024-07-02T09:06:48.450857542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:48.453919 containerd[1697]: time="2024-07-02T09:06:48.453864429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 09:06:48.457123 containerd[1697]: time="2024-07-02T09:06:48.457079237Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:48.462982 containerd[1697]: time="2024-07-02T09:06:48.462921891Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:48.464397 containerd[1697]: time="2024-07-02T09:06:48.463657613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.709666231s" Jul 2 09:06:48.464397 containerd[1697]: time="2024-07-02T09:06:48.464027973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 09:06:48.495469 containerd[1697]: time="2024-07-02T09:06:48.495415846Z" level=info msg="CreateContainer within sandbox \"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 09:06:48.531221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345127016.mount: Deactivated successfully. Jul 2 09:06:48.541757 containerd[1697]: time="2024-07-02T09:06:48.541700230Z" level=info msg="CreateContainer within sandbox \"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa\"" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.447 [INFO][4730] k8s.go 608: Cleaning up netns ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.447 [INFO][4730] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" iface="eth0" netns="/var/run/netns/cni-5c20707a-fd3e-6728-420a-1d6e149f4cc7" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.448 [INFO][4730] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" iface="eth0" netns="/var/run/netns/cni-5c20707a-fd3e-6728-420a-1d6e149f4cc7" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.449 [INFO][4730] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" iface="eth0" netns="/var/run/netns/cni-5c20707a-fd3e-6728-420a-1d6e149f4cc7" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.449 [INFO][4730] k8s.go 615: Releasing IP address(es) ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.449 [INFO][4730] utils.go 188: Calico CNI releasing IP address ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.514 [INFO][4742] ipam_plugin.go 411: Releasing address using handleID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.514 [INFO][4742] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.514 [INFO][4742] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.537 [WARNING][4742] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.537 [INFO][4742] ipam_plugin.go 439: Releasing address using workloadID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.539 [INFO][4742] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:48.543225 containerd[1697]: 2024-07-02 09:06:48.541 [INFO][4730] k8s.go 621: Teardown processing complete. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:06:48.549938 containerd[1697]: time="2024-07-02T09:06:48.547530043Z" level=info msg="StartContainer for \"99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa\"" Jul 2 09:06:48.548222 systemd[1]: run-netns-cni\x2d5c20707a\x2dfd3e\x2d6728\x2d420a\x2d1d6e149f4cc7.mount: Deactivated successfully. 
Jul 2 09:06:48.550435 containerd[1697]: time="2024-07-02T09:06:48.550335729Z" level=info msg="TearDown network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" successfully" Jul 2 09:06:48.550542 containerd[1697]: time="2024-07-02T09:06:48.550526730Z" level=info msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" returns successfully" Jul 2 09:06:48.552148 containerd[1697]: time="2024-07-02T09:06:48.552108813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpz5d,Uid:667beeb3-e864-4cf6-8526-e51c0263f76a,Namespace:kube-system,Attempt:1,}" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.469 [INFO][4731] k8s.go 608: Cleaning up netns ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.472 [INFO][4731] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" iface="eth0" netns="/var/run/netns/cni-d0671a49-5321-44d6-7aa3-d56032de4b42" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.472 [INFO][4731] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" iface="eth0" netns="/var/run/netns/cni-d0671a49-5321-44d6-7aa3-d56032de4b42" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.473 [INFO][4731] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" iface="eth0" netns="/var/run/netns/cni-d0671a49-5321-44d6-7aa3-d56032de4b42" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.473 [INFO][4731] k8s.go 615: Releasing IP address(es) ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.473 [INFO][4731] utils.go 188: Calico CNI releasing IP address ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.524 [INFO][4747] ipam_plugin.go 411: Releasing address using handleID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.524 [INFO][4747] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.539 [INFO][4747] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.555 [WARNING][4747] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.555 [INFO][4747] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.558 [INFO][4747] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:48.565439 containerd[1697]: 2024-07-02 09:06:48.562 [INFO][4731] k8s.go 621: Teardown processing complete. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:06:48.566478 containerd[1697]: time="2024-07-02T09:06:48.566441085Z" level=info msg="TearDown network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" successfully" Jul 2 09:06:48.567695 containerd[1697]: time="2024-07-02T09:06:48.566532686Z" level=info msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" returns successfully" Jul 2 09:06:48.567885 containerd[1697]: time="2024-07-02T09:06:48.567856849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zctz4,Uid:c7c64a89-d23b-4bef-9c27-bbb0ad23595e,Namespace:calico-system,Attempt:1,}" Jul 2 09:06:48.586613 systemd[1]: Started cri-containerd-99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa.scope - libcontainer container 99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa. 
Jul 2 09:06:48.676544 containerd[1697]: time="2024-07-02T09:06:48.676427213Z" level=info msg="StartContainer for \"99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa\" returns successfully" Jul 2 09:06:48.792245 systemd-networkd[1324]: cali4b99e83228c: Link UP Jul 2 09:06:48.793785 systemd-networkd[1324]: cali4b99e83228c: Gained carrier Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.640 [INFO][4772] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0 coredns-76f75df574- kube-system 667beeb3-e864-4cf6-8526-e51c0263f76a 711 0 2024-07-02 09:06:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce coredns-76f75df574-rpz5d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b99e83228c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.641 [INFO][4772] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.708 [INFO][4808] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" HandleID="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" 
Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.723 [INFO][4808] ipam_plugin.go 264: Auto assigning IP ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" HandleID="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"coredns-76f75df574-rpz5d", "timestamp":"2024-07-02 09:06:48.708622765 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.724 [INFO][4808] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.724 [INFO][4808] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.726 [INFO][4808] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.737 [INFO][4808] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.748 [INFO][4808] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.756 [INFO][4808] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.758 [INFO][4808] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.763 [INFO][4808] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.763 [INFO][4808] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.770 [INFO][4808] ipam.go 1685: Creating new handle: k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381 Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.775 [INFO][4808] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.782 [INFO][4808] ipam.go 1216: Successfully claimed IPs: [192.168.73.3/26] block=192.168.73.0/26 
handle="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.782 [INFO][4808] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.3/26] handle="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.782 [INFO][4808] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:48.826467 containerd[1697]: 2024-07-02 09:06:48.782 [INFO][4808] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.3/26] IPv6=[] ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" HandleID="k8s-pod-network.a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.785 [INFO][4772] k8s.go 386: Populated endpoint ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"667beeb3-e864-4cf6-8526-e51c0263f76a", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"coredns-76f75df574-rpz5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b99e83228c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.785 [INFO][4772] k8s.go 387: Calico CNI using IPs: [192.168.73.3/32] ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.785 [INFO][4772] dataplane_linux.go 68: Setting the host side veth name to cali4b99e83228c ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.793 [INFO][4772] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" 
WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.795 [INFO][4772] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"667beeb3-e864-4cf6-8526-e51c0263f76a", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381", Pod:"coredns-76f75df574-rpz5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b99e83228c", MAC:"ce:69:b9:17:1e:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:48.827291 containerd[1697]: 2024-07-02 09:06:48.823 [INFO][4772] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381" Namespace="kube-system" Pod="coredns-76f75df574-rpz5d" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:06:48.859730 systemd-networkd[1324]: calibe71158417e: Link UP Jul 2 09:06:48.863505 systemd-networkd[1324]: calibe71158417e: Gained carrier Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.684 [INFO][4792] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0 csi-node-driver- calico-system c7c64a89-d23b-4bef-9c27-bbb0ad23595e 712 0 2024-07-02 09:06:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce csi-node-driver-zctz4 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibe71158417e [] []}} ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.684 [INFO][4792] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" 
Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.739 [INFO][4815] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" HandleID="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.761 [INFO][4815] ipam_plugin.go 264: Auto assigning IP ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" HandleID="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"csi-node-driver-zctz4", "timestamp":"2024-07-02 09:06:48.739016034 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.761 [INFO][4815] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.783 [INFO][4815] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.783 [INFO][4815] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.788 [INFO][4815] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.817 [INFO][4815] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.831 [INFO][4815] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.834 [INFO][4815] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.837 [INFO][4815] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.837 [INFO][4815] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.839 [INFO][4815] ipam.go 1685: Creating new handle: k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.844 [INFO][4815] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.852 [INFO][4815] ipam.go 1216: Successfully claimed IPs: [192.168.73.4/26] block=192.168.73.0/26 
handle="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.852 [INFO][4815] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.4/26] handle="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.852 [INFO][4815] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:06:48.899932 containerd[1697]: 2024-07-02 09:06:48.852 [INFO][4815] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.4/26] IPv6=[] ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" HandleID="k8s-pod-network.7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.854 [INFO][4792] k8s.go 386: Populated endpoint ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7c64a89-d23b-4bef-9c27-bbb0ad23595e", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"csi-node-driver-zctz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibe71158417e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.854 [INFO][4792] k8s.go 387: Calico CNI using IPs: [192.168.73.4/32] ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.855 [INFO][4792] dataplane_linux.go 68: Setting the host side veth name to calibe71158417e ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.865 [INFO][4792] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.866 [INFO][4792] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" 
Namespace="calico-system" Pod="csi-node-driver-zctz4" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7c64a89-d23b-4bef-9c27-bbb0ad23595e", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a", Pod:"csi-node-driver-zctz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibe71158417e", MAC:"8e:a1:1a:5e:2e:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:06:48.901176 containerd[1697]: 2024-07-02 09:06:48.895 [INFO][4792] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a" Namespace="calico-system" Pod="csi-node-driver-zctz4" 
WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:06:48.918481 containerd[1697]: time="2024-07-02T09:06:48.917408955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:48.918481 containerd[1697]: time="2024-07-02T09:06:48.917474716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:48.918481 containerd[1697]: time="2024-07-02T09:06:48.917511076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:48.918481 containerd[1697]: time="2024-07-02T09:06:48.917542356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:48.944613 systemd[1]: Started cri-containerd-a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381.scope - libcontainer container a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381. Jul 2 09:06:48.971325 containerd[1697]: time="2024-07-02T09:06:48.971206356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:06:48.972423 containerd[1697]: time="2024-07-02T09:06:48.972077078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:48.972423 containerd[1697]: time="2024-07-02T09:06:48.972106439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:06:48.972423 containerd[1697]: time="2024-07-02T09:06:48.972226679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:06:49.002096 systemd[1]: Started cri-containerd-7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a.scope - libcontainer container 7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a. Jul 2 09:06:49.015843 containerd[1697]: time="2024-07-02T09:06:49.015532536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpz5d,Uid:667beeb3-e864-4cf6-8526-e51c0263f76a,Namespace:kube-system,Attempt:1,} returns sandbox id \"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381\"" Jul 2 09:06:49.024167 containerd[1697]: time="2024-07-02T09:06:49.024111396Z" level=info msg="CreateContainer within sandbox \"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:06:49.048670 containerd[1697]: time="2024-07-02T09:06:49.048634091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zctz4,Uid:c7c64a89-d23b-4bef-9c27-bbb0ad23595e,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a\"" Jul 2 09:06:49.052810 containerd[1697]: time="2024-07-02T09:06:49.052755220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 09:06:49.075673 containerd[1697]: time="2024-07-02T09:06:49.075554471Z" level=info msg="CreateContainer within sandbox \"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"820c20d042c8cff5aba8fb28d36f884d4748221a881df6bf2284f5f5379b2b63\"" Jul 2 09:06:49.078763 containerd[1697]: time="2024-07-02T09:06:49.077776116Z" level=info msg="StartContainer for \"820c20d042c8cff5aba8fb28d36f884d4748221a881df6bf2284f5f5379b2b63\"" Jul 2 09:06:49.116154 systemd[1]: Started cri-containerd-820c20d042c8cff5aba8fb28d36f884d4748221a881df6bf2284f5f5379b2b63.scope - libcontainer container 
820c20d042c8cff5aba8fb28d36f884d4748221a881df6bf2284f5f5379b2b63. Jul 2 09:06:49.166910 containerd[1697]: time="2024-07-02T09:06:49.166833877Z" level=info msg="StartContainer for \"820c20d042c8cff5aba8fb28d36f884d4748221a881df6bf2284f5f5379b2b63\" returns successfully" Jul 2 09:06:49.489433 systemd[1]: run-netns-cni\x2dd0671a49\x2d5321\x2d44d6\x2d7aa3\x2dd56032de4b42.mount: Deactivated successfully. Jul 2 09:06:49.672579 kubelet[3236]: I0702 09:06:49.672519 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rpz5d" podStartSLOduration=32.672473375 podStartE2EDuration="32.672473375s" podCreationTimestamp="2024-07-02 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:06:49.642267307 +0000 UTC m=+47.357853490" watchObservedRunningTime="2024-07-02 09:06:49.672473375 +0000 UTC m=+47.388059518" Jul 2 09:06:49.732181 kubelet[3236]: I0702 09:06:49.731659 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75647d58f7-5rvzc" podStartSLOduration=25.01906355 podStartE2EDuration="26.731350387s" podCreationTimestamp="2024-07-02 09:06:23 +0000 UTC" firstStartedPulling="2024-07-02 09:06:46.752433138 +0000 UTC m=+44.468019281" lastFinishedPulling="2024-07-02 09:06:48.464719975 +0000 UTC m=+46.180306118" observedRunningTime="2024-07-02 09:06:49.699544156 +0000 UTC m=+47.415130299" watchObservedRunningTime="2024-07-02 09:06:49.731350387 +0000 UTC m=+47.446936530" Jul 2 09:06:50.064784 systemd-networkd[1324]: cali4b99e83228c: Gained IPv6LL Jul 2 09:06:50.180979 containerd[1697]: time="2024-07-02T09:06:50.180227318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:50.182462 containerd[1697]: time="2024-07-02T09:06:50.182395643Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 09:06:50.186749 containerd[1697]: time="2024-07-02T09:06:50.186664852Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:50.191838 containerd[1697]: time="2024-07-02T09:06:50.191746064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:50.192874 containerd[1697]: time="2024-07-02T09:06:50.192431705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.139631405s" Jul 2 09:06:50.192874 containerd[1697]: time="2024-07-02T09:06:50.192477225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 09:06:50.194631 containerd[1697]: time="2024-07-02T09:06:50.194579390Z" level=info msg="CreateContainer within sandbox \"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 09:06:50.230811 containerd[1697]: time="2024-07-02T09:06:50.230702151Z" level=info msg="CreateContainer within sandbox \"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"73db1fa1611b8fdcd11dcc8dec71f56ccedc813b7dd9236db59e1ae71d267320\"" Jul 2 09:06:50.233006 containerd[1697]: time="2024-07-02T09:06:50.231516553Z" level=info msg="StartContainer for 
\"73db1fa1611b8fdcd11dcc8dec71f56ccedc813b7dd9236db59e1ae71d267320\"" Jul 2 09:06:50.261603 systemd[1]: Started cri-containerd-73db1fa1611b8fdcd11dcc8dec71f56ccedc813b7dd9236db59e1ae71d267320.scope - libcontainer container 73db1fa1611b8fdcd11dcc8dec71f56ccedc813b7dd9236db59e1ae71d267320. Jul 2 09:06:50.301274 containerd[1697]: time="2024-07-02T09:06:50.301222750Z" level=info msg="StartContainer for \"73db1fa1611b8fdcd11dcc8dec71f56ccedc813b7dd9236db59e1ae71d267320\" returns successfully" Jul 2 09:06:50.302412 containerd[1697]: time="2024-07-02T09:06:50.302347273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 09:06:50.320530 systemd-networkd[1324]: calibe71158417e: Gained IPv6LL Jul 2 09:06:51.459426 containerd[1697]: time="2024-07-02T09:06:51.458872476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:51.460882 containerd[1697]: time="2024-07-02T09:06:51.460832760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 09:06:51.463888 containerd[1697]: time="2024-07-02T09:06:51.463813647Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:51.467015 containerd[1697]: time="2024-07-02T09:06:51.466943174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:06:51.468131 containerd[1697]: time="2024-07-02T09:06:51.467623816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.165206742s" Jul 2 09:06:51.468131 containerd[1697]: time="2024-07-02T09:06:51.467665376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 09:06:51.471109 containerd[1697]: time="2024-07-02T09:06:51.471044583Z" level=info msg="CreateContainer within sandbox \"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 09:06:51.505859 containerd[1697]: time="2024-07-02T09:06:51.505794261Z" level=info msg="CreateContainer within sandbox \"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8c9a3dd2b8370c237e1313e15cfb11e65d899881f413895a9a9638b20f5f0501\"" Jul 2 09:06:51.507895 containerd[1697]: time="2024-07-02T09:06:51.506769864Z" level=info msg="StartContainer for \"8c9a3dd2b8370c237e1313e15cfb11e65d899881f413895a9a9638b20f5f0501\"" Jul 2 09:06:51.549660 systemd[1]: Started cri-containerd-8c9a3dd2b8370c237e1313e15cfb11e65d899881f413895a9a9638b20f5f0501.scope - libcontainer container 8c9a3dd2b8370c237e1313e15cfb11e65d899881f413895a9a9638b20f5f0501. 
Jul 2 09:06:51.583815 containerd[1697]: time="2024-07-02T09:06:51.583705557Z" level=info msg="StartContainer for \"8c9a3dd2b8370c237e1313e15cfb11e65d899881f413895a9a9638b20f5f0501\" returns successfully" Jul 2 09:06:51.647669 kubelet[3236]: I0702 09:06:51.646656 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-zctz4" podStartSLOduration=26.229494538 podStartE2EDuration="28.646611978s" podCreationTimestamp="2024-07-02 09:06:23 +0000 UTC" firstStartedPulling="2024-07-02 09:06:49.050809816 +0000 UTC m=+46.766395919" lastFinishedPulling="2024-07-02 09:06:51.467927216 +0000 UTC m=+49.183513359" observedRunningTime="2024-07-02 09:06:51.646086897 +0000 UTC m=+49.361673040" watchObservedRunningTime="2024-07-02 09:06:51.646611978 +0000 UTC m=+49.362198121" Jul 2 09:06:52.493923 kubelet[3236]: I0702 09:06:52.493821 3236 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 09:06:52.493923 kubelet[3236]: I0702 09:06:52.493858 3236 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 09:07:02.377933 containerd[1697]: time="2024-07-02T09:07:02.377893709Z" level=info msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.419 [WARNING][5115] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed", Pod:"coredns-76f75df574-j2fst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a68caff155", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.419 [INFO][5115] k8s.go 608: 
Cleaning up netns ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.419 [INFO][5115] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" iface="eth0" netns="" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.419 [INFO][5115] k8s.go 615: Releasing IP address(es) ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.419 [INFO][5115] utils.go 188: Calico CNI releasing IP address ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.440 [INFO][5123] ipam_plugin.go 411: Releasing address using handleID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.441 [INFO][5123] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.441 [INFO][5123] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.449 [WARNING][5123] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.449 [INFO][5123] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.451 [INFO][5123] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.454700 containerd[1697]: 2024-07-02 09:07:02.452 [INFO][5115] k8s.go 621: Teardown processing complete. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.455393 containerd[1697]: time="2024-07-02T09:07:02.455211044Z" level=info msg="TearDown network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" successfully" Jul 2 09:07:02.455393 containerd[1697]: time="2024-07-02T09:07:02.455258124Z" level=info msg="StopPodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" returns successfully" Jul 2 09:07:02.457218 containerd[1697]: time="2024-07-02T09:07:02.457143928Z" level=info msg="RemovePodSandbox for \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" Jul 2 09:07:02.457331 containerd[1697]: time="2024-07-02T09:07:02.457211048Z" level=info msg="Forcibly stopping sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\"" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.493 [WARNING][5142] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"505b2c6e-5cdd-4d6a-911b-bb1a7ce2e04a", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"2c5027a9ef23980541a075170d0f7f42f60d027a41dac10997599a87140f78ed", Pod:"coredns-76f75df574-j2fst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a68caff155", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.494 [INFO][5142] k8s.go 608: 
Cleaning up netns ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.494 [INFO][5142] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" iface="eth0" netns="" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.494 [INFO][5142] k8s.go 615: Releasing IP address(es) ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.494 [INFO][5142] utils.go 188: Calico CNI releasing IP address ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.515 [INFO][5149] ipam_plugin.go 411: Releasing address using handleID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.515 [INFO][5149] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.515 [INFO][5149] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.525 [WARNING][5149] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.525 [INFO][5149] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" HandleID="k8s-pod-network.c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--j2fst-eth0" Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.527 [INFO][5149] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.530932 containerd[1697]: 2024-07-02 09:07:02.529 [INFO][5142] k8s.go 621: Teardown processing complete. ContainerID="c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1" Jul 2 09:07:02.530932 containerd[1697]: time="2024-07-02T09:07:02.530809735Z" level=info msg="TearDown network for sandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" successfully" Jul 2 09:07:02.539114 containerd[1697]: time="2024-07-02T09:07:02.539055994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 09:07:02.539262 containerd[1697]: time="2024-07-02T09:07:02.539174114Z" level=info msg="RemovePodSandbox \"c2a55b436db0beda06a417c316c62ff3c96682c06fa297de060336bf9501b4c1\" returns successfully" Jul 2 09:07:02.540120 containerd[1697]: time="2024-07-02T09:07:02.539793076Z" level=info msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.582 [WARNING][5168] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"667beeb3-e864-4cf6-8526-e51c0263f76a", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381", Pod:"coredns-76f75df574-rpz5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b99e83228c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.583 [INFO][5168] k8s.go 608: Cleaning up netns ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.583 [INFO][5168] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" iface="eth0" netns="" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.583 [INFO][5168] k8s.go 615: Releasing IP address(es) ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.583 [INFO][5168] utils.go 188: Calico CNI releasing IP address ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.632 [INFO][5180] ipam_plugin.go 411: Releasing address using handleID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.632 [INFO][5180] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.633 [INFO][5180] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.646 [WARNING][5180] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.646 [INFO][5180] ipam_plugin.go 439: Releasing address using workloadID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.647 [INFO][5180] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.651416 containerd[1697]: 2024-07-02 09:07:02.649 [INFO][5168] k8s.go 621: Teardown processing complete. 
ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.651844 containerd[1697]: time="2024-07-02T09:07:02.651610049Z" level=info msg="TearDown network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" successfully" Jul 2 09:07:02.651844 containerd[1697]: time="2024-07-02T09:07:02.651640529Z" level=info msg="StopPodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" returns successfully" Jul 2 09:07:02.652425 containerd[1697]: time="2024-07-02T09:07:02.652376491Z" level=info msg="RemovePodSandbox for \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" Jul 2 09:07:02.652468 containerd[1697]: time="2024-07-02T09:07:02.652420731Z" level=info msg="Forcibly stopping sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\"" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.691 [WARNING][5213] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"667beeb3-e864-4cf6-8526-e51c0263f76a", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"a40886652591133df65228e43c689dec49ca134d1cf45ed5e57537fe189a1381", Pod:"coredns-76f75df574-rpz5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b99e83228c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.692 [INFO][5213] k8s.go 608: 
Cleaning up netns ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.692 [INFO][5213] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" iface="eth0" netns="" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.692 [INFO][5213] k8s.go 615: Releasing IP address(es) ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.692 [INFO][5213] utils.go 188: Calico CNI releasing IP address ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.713 [INFO][5219] ipam_plugin.go 411: Releasing address using handleID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.713 [INFO][5219] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.713 [INFO][5219] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.723 [WARNING][5219] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.723 [INFO][5219] ipam_plugin.go 439: Releasing address using workloadID ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" HandleID="k8s-pod-network.54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Workload="ci--3975.1.1--a--59f2e70dce-k8s-coredns--76f75df574--rpz5d-eth0" Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.725 [INFO][5219] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.729521 containerd[1697]: 2024-07-02 09:07:02.728 [INFO][5213] k8s.go 621: Teardown processing complete. ContainerID="54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b" Jul 2 09:07:02.730043 containerd[1697]: time="2024-07-02T09:07:02.729572306Z" level=info msg="TearDown network for sandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" successfully" Jul 2 09:07:02.734970 containerd[1697]: time="2024-07-02T09:07:02.734917278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 09:07:02.735098 containerd[1697]: time="2024-07-02T09:07:02.734993358Z" level=info msg="RemovePodSandbox \"54ad75528011048bb4a50ec7b05fdc35fb491093e705eeb1c83391358e9bb96b\" returns successfully" Jul 2 09:07:02.735805 containerd[1697]: time="2024-07-02T09:07:02.735591079Z" level=info msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.770 [WARNING][5239] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7c64a89-d23b-4bef-9c27-bbb0ad23595e", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a", Pod:"csi-node-driver-zctz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibe71158417e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.771 [INFO][5239] k8s.go 608: Cleaning up netns ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.771 [INFO][5239] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" iface="eth0" netns="" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.771 [INFO][5239] k8s.go 615: Releasing IP address(es) ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.771 [INFO][5239] utils.go 188: Calico CNI releasing IP address ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.791 [INFO][5245] ipam_plugin.go 411: Releasing address using handleID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.791 [INFO][5245] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.791 [INFO][5245] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.799 [WARNING][5245] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.799 [INFO][5245] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.801 [INFO][5245] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.803766 containerd[1697]: 2024-07-02 09:07:02.802 [INFO][5239] k8s.go 621: Teardown processing complete. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.804392 containerd[1697]: time="2024-07-02T09:07:02.804260235Z" level=info msg="TearDown network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" successfully" Jul 2 09:07:02.804392 containerd[1697]: time="2024-07-02T09:07:02.804308435Z" level=info msg="StopPodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" returns successfully" Jul 2 09:07:02.805153 containerd[1697]: time="2024-07-02T09:07:02.804786796Z" level=info msg="RemovePodSandbox for \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" Jul 2 09:07:02.805153 containerd[1697]: time="2024-07-02T09:07:02.804847316Z" level=info msg="Forcibly stopping sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\"" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.841 [WARNING][5263] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7c64a89-d23b-4bef-9c27-bbb0ad23595e", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"7f2a30b108ddbc009e3b1b52edd14c339aa6beb33721efd76ad10addaf02052a", Pod:"csi-node-driver-zctz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibe71158417e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.841 [INFO][5263] k8s.go 608: Cleaning up netns ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.841 [INFO][5263] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" iface="eth0" netns="" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.841 [INFO][5263] k8s.go 615: Releasing IP address(es) ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.841 [INFO][5263] utils.go 188: Calico CNI releasing IP address ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.862 [INFO][5269] ipam_plugin.go 411: Releasing address using handleID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.862 [INFO][5269] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.862 [INFO][5269] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.872 [WARNING][5269] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.872 [INFO][5269] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" HandleID="k8s-pod-network.12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Workload="ci--3975.1.1--a--59f2e70dce-k8s-csi--node--driver--zctz4-eth0" Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.873 [INFO][5269] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.877400 containerd[1697]: 2024-07-02 09:07:02.875 [INFO][5263] k8s.go 621: Teardown processing complete. ContainerID="12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244" Jul 2 09:07:02.877400 containerd[1697]: time="2024-07-02T09:07:02.876790560Z" level=info msg="TearDown network for sandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" successfully" Jul 2 09:07:02.882987 containerd[1697]: time="2024-07-02T09:07:02.882921773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 09:07:02.883112 containerd[1697]: time="2024-07-02T09:07:02.883042854Z" level=info msg="RemovePodSandbox \"12e9ea955c170db21692da46c26d8ca02731b625bf1938c77ce82035525c0244\" returns successfully" Jul 2 09:07:02.883612 containerd[1697]: time="2024-07-02T09:07:02.883555615Z" level=info msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.919 [WARNING][5287] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0", GenerateName:"calico-kube-controllers-75647d58f7-", Namespace:"calico-system", SelfLink:"", UID:"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75647d58f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996", Pod:"calico-kube-controllers-75647d58f7-5rvzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0adb2903577", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.920 [INFO][5287] k8s.go 608: Cleaning up netns ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.920 [INFO][5287] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" iface="eth0" netns="" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.920 [INFO][5287] k8s.go 615: Releasing IP address(es) ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.920 [INFO][5287] utils.go 188: Calico CNI releasing IP address ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.939 [INFO][5293] ipam_plugin.go 411: Releasing address using handleID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.940 [INFO][5293] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.940 [INFO][5293] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.949 [WARNING][5293] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.949 [INFO][5293] ipam_plugin.go 439: Releasing address using workloadID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.950 [INFO][5293] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:02.954020 containerd[1697]: 2024-07-02 09:07:02.952 [INFO][5287] k8s.go 621: Teardown processing complete. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:02.954020 containerd[1697]: time="2024-07-02T09:07:02.953996895Z" level=info msg="TearDown network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" successfully" Jul 2 09:07:02.954020 containerd[1697]: time="2024-07-02T09:07:02.954023015Z" level=info msg="StopPodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" returns successfully" Jul 2 09:07:02.956182 containerd[1697]: time="2024-07-02T09:07:02.954796376Z" level=info msg="RemovePodSandbox for \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" Jul 2 09:07:02.956182 containerd[1697]: time="2024-07-02T09:07:02.954830336Z" level=info msg="Forcibly stopping sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\"" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:02.990 [WARNING][5313] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0", GenerateName:"calico-kube-controllers-75647d58f7-", Namespace:"calico-system", SelfLink:"", UID:"105f3c79-8c9f-4f4b-b6cd-24afabef9d5e", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75647d58f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"b425a2156e8bba2bcbc3e1059bb5f6bc1e372c5227166c025aa8f59d3ba51996", Pod:"calico-kube-controllers-75647d58f7-5rvzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0adb2903577", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:02.991 [INFO][5313] k8s.go 608: Cleaning up netns ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:02.991 [INFO][5313] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" iface="eth0" netns="" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:02.991 [INFO][5313] k8s.go 615: Releasing IP address(es) ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:02.991 [INFO][5313] utils.go 188: Calico CNI releasing IP address ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.009 [INFO][5319] ipam_plugin.go 411: Releasing address using handleID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.009 [INFO][5319] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.009 [INFO][5319] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.018 [WARNING][5319] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.018 [INFO][5319] ipam_plugin.go 439: Releasing address using workloadID ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" HandleID="k8s-pod-network.249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--kube--controllers--75647d58f7--5rvzc-eth0" Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.020 [INFO][5319] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:03.023256 containerd[1697]: 2024-07-02 09:07:03.021 [INFO][5313] k8s.go 621: Teardown processing complete. ContainerID="249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05" Jul 2 09:07:03.023778 containerd[1697]: time="2024-07-02T09:07:03.023295612Z" level=info msg="TearDown network for sandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" successfully" Jul 2 09:07:03.029816 containerd[1697]: time="2024-07-02T09:07:03.029756346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 09:07:03.030089 containerd[1697]: time="2024-07-02T09:07:03.029834267Z" level=info msg="RemovePodSandbox \"249ed8b30cda5225e076da1e87153aa0571e45b5ea901b0e028c3b3b630b3d05\" returns successfully" Jul 2 09:07:12.862194 kubelet[3236]: I0702 09:07:12.862140 3236 topology_manager.go:215] "Topology Admit Handler" podUID="8aed514d-ab59-4d3a-ba17-60a792518157" podNamespace="calico-apiserver" podName="calico-apiserver-cbcb9659c-xcrd7" Jul 2 09:07:12.869793 systemd[1]: Created slice kubepods-besteffort-pod8aed514d_ab59_4d3a_ba17_60a792518157.slice - libcontainer container kubepods-besteffort-pod8aed514d_ab59_4d3a_ba17_60a792518157.slice. Jul 2 09:07:12.896450 kubelet[3236]: I0702 09:07:12.896396 3236 topology_manager.go:215] "Topology Admit Handler" podUID="9bcc1d26-f630-4c8f-b56d-639d080bb588" podNamespace="calico-apiserver" podName="calico-apiserver-cbcb9659c-gl7mv" Jul 2 09:07:12.902791 systemd[1]: Created slice kubepods-besteffort-pod9bcc1d26_f630_4c8f_b56d_639d080bb588.slice - libcontainer container kubepods-besteffort-pod9bcc1d26_f630_4c8f_b56d_639d080bb588.slice. 
Jul 2 09:07:12.931971 kubelet[3236]: I0702 09:07:12.931925 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nps\" (UniqueName: \"kubernetes.io/projected/8aed514d-ab59-4d3a-ba17-60a792518157-kube-api-access-f6nps\") pod \"calico-apiserver-cbcb9659c-xcrd7\" (UID: \"8aed514d-ab59-4d3a-ba17-60a792518157\") " pod="calico-apiserver/calico-apiserver-cbcb9659c-xcrd7" Jul 2 09:07:12.931971 kubelet[3236]: I0702 09:07:12.931980 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bcc1d26-f630-4c8f-b56d-639d080bb588-calico-apiserver-certs\") pod \"calico-apiserver-cbcb9659c-gl7mv\" (UID: \"9bcc1d26-f630-4c8f-b56d-639d080bb588\") " pod="calico-apiserver/calico-apiserver-cbcb9659c-gl7mv" Jul 2 09:07:12.932154 kubelet[3236]: I0702 09:07:12.932008 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzv4\" (UniqueName: \"kubernetes.io/projected/9bcc1d26-f630-4c8f-b56d-639d080bb588-kube-api-access-nhzv4\") pod \"calico-apiserver-cbcb9659c-gl7mv\" (UID: \"9bcc1d26-f630-4c8f-b56d-639d080bb588\") " pod="calico-apiserver/calico-apiserver-cbcb9659c-gl7mv" Jul 2 09:07:12.932154 kubelet[3236]: I0702 09:07:12.932033 3236 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8aed514d-ab59-4d3a-ba17-60a792518157-calico-apiserver-certs\") pod \"calico-apiserver-cbcb9659c-xcrd7\" (UID: \"8aed514d-ab59-4d3a-ba17-60a792518157\") " pod="calico-apiserver/calico-apiserver-cbcb9659c-xcrd7" Jul 2 09:07:13.032437 kubelet[3236]: E0702 09:07:13.032387 3236 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:07:13.032707 kubelet[3236]: E0702 09:07:13.032459 3236 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8aed514d-ab59-4d3a-ba17-60a792518157-calico-apiserver-certs podName:8aed514d-ab59-4d3a-ba17-60a792518157 nodeName:}" failed. No retries permitted until 2024-07-02 09:07:13.532440705 +0000 UTC m=+71.248026848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8aed514d-ab59-4d3a-ba17-60a792518157-calico-apiserver-certs") pod "calico-apiserver-cbcb9659c-xcrd7" (UID: "8aed514d-ab59-4d3a-ba17-60a792518157") : secret "calico-apiserver-certs" not found Jul 2 09:07:13.033495 kubelet[3236]: E0702 09:07:13.032766 3236 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:07:13.033495 kubelet[3236]: E0702 09:07:13.032881 3236 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcc1d26-f630-4c8f-b56d-639d080bb588-calico-apiserver-certs podName:9bcc1d26-f630-4c8f-b56d-639d080bb588 nodeName:}" failed. No retries permitted until 2024-07-02 09:07:13.532869226 +0000 UTC m=+71.248455329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9bcc1d26-f630-4c8f-b56d-639d080bb588-calico-apiserver-certs") pod "calico-apiserver-cbcb9659c-gl7mv" (UID: "9bcc1d26-f630-4c8f-b56d-639d080bb588") : secret "calico-apiserver-certs" not found Jul 2 09:07:13.536475 kubelet[3236]: E0702 09:07:13.535936 3236 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:07:13.536475 kubelet[3236]: E0702 09:07:13.536003 3236 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bcc1d26-f630-4c8f-b56d-639d080bb588-calico-apiserver-certs podName:9bcc1d26-f630-4c8f-b56d-639d080bb588 nodeName:}" failed. 
No retries permitted until 2024-07-02 09:07:14.535986692 +0000 UTC m=+72.251572835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9bcc1d26-f630-4c8f-b56d-639d080bb588-calico-apiserver-certs") pod "calico-apiserver-cbcb9659c-gl7mv" (UID: "9bcc1d26-f630-4c8f-b56d-639d080bb588") : secret "calico-apiserver-certs" not found Jul 2 09:07:13.536475 kubelet[3236]: E0702 09:07:13.536398 3236 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:07:13.536475 kubelet[3236]: E0702 09:07:13.536434 3236 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8aed514d-ab59-4d3a-ba17-60a792518157-calico-apiserver-certs podName:8aed514d-ab59-4d3a-ba17-60a792518157 nodeName:}" failed. No retries permitted until 2024-07-02 09:07:14.536423813 +0000 UTC m=+72.252009956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8aed514d-ab59-4d3a-ba17-60a792518157-calico-apiserver-certs") pod "calico-apiserver-cbcb9659c-xcrd7" (UID: "8aed514d-ab59-4d3a-ba17-60a792518157") : secret "calico-apiserver-certs" not found Jul 2 09:07:14.678378 containerd[1697]: time="2024-07-02T09:07:14.678231152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbcb9659c-xcrd7,Uid:8aed514d-ab59-4d3a-ba17-60a792518157,Namespace:calico-apiserver,Attempt:0,}" Jul 2 09:07:14.720513 containerd[1697]: time="2024-07-02T09:07:14.720064441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbcb9659c-gl7mv,Uid:9bcc1d26-f630-4c8f-b56d-639d080bb588,Namespace:calico-apiserver,Attempt:0,}" Jul 2 09:07:14.868763 systemd-networkd[1324]: cali5ef52de2409: Link UP Jul 2 09:07:14.869794 systemd-networkd[1324]: cali5ef52de2409: Gained carrier Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.756 [INFO][5362] plugin.go 326: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0 calico-apiserver-cbcb9659c- calico-apiserver 8aed514d-ab59-4d3a-ba17-60a792518157 847 0 2024-07-02 09:07:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cbcb9659c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce calico-apiserver-cbcb9659c-xcrd7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ef52de2409 [] []}} ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.757 [INFO][5362] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.805 [INFO][5382] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" HandleID="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.821 [INFO][5382] ipam_plugin.go 264: Auto assigning IP ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" HandleID="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" 
Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"calico-apiserver-cbcb9659c-xcrd7", "timestamp":"2024-07-02 09:07:14.805779742 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.822 [INFO][5382] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.822 [INFO][5382] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.822 [INFO][5382] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.825 [INFO][5382] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.835 [INFO][5382] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.843 [INFO][5382] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.845 [INFO][5382] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.847 [INFO][5382] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 
09:07:14.847 [INFO][5382] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.849 [INFO][5382] ipam.go 1685: Creating new handle: k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.855 [INFO][5382] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5382] ipam.go 1216: Successfully claimed IPs: [192.168.73.5/26] block=192.168.73.0/26 handle="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5382] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.5/26] handle="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5382] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:07:14.897277 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5382] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.5/26] IPv6=[] ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" HandleID="k8s-pod-network.9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.861 [INFO][5362] k8s.go 386: Populated endpoint ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0", GenerateName:"calico-apiserver-cbcb9659c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8aed514d-ab59-4d3a-ba17-60a792518157", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbcb9659c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"calico-apiserver-cbcb9659c-xcrd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef52de2409", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.862 [INFO][5362] k8s.go 387: Calico CNI using IPs: [192.168.73.5/32] ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.862 [INFO][5362] dataplane_linux.go 68: Setting the host side veth name to cali5ef52de2409 ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.867 [INFO][5362] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.868 [INFO][5362] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0", 
GenerateName:"calico-apiserver-cbcb9659c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8aed514d-ab59-4d3a-ba17-60a792518157", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbcb9659c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d", Pod:"calico-apiserver-cbcb9659c-xcrd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef52de2409", MAC:"6e:1c:05:11:4e:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:14.899347 containerd[1697]: 2024-07-02 09:07:14.889 [INFO][5362] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-xcrd7" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--xcrd7-eth0" Jul 2 09:07:14.937444 containerd[1697]: time="2024-07-02T09:07:14.936754940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:07:14.937444 containerd[1697]: time="2024-07-02T09:07:14.936810420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:07:14.937444 containerd[1697]: time="2024-07-02T09:07:14.936830100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:07:14.937444 containerd[1697]: time="2024-07-02T09:07:14.936843780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:07:14.941708 systemd-networkd[1324]: caliab323296263: Link UP Jul 2 09:07:14.941903 systemd-networkd[1324]: caliab323296263: Gained carrier Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.797 [INFO][5376] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0 calico-apiserver-cbcb9659c- calico-apiserver 9bcc1d26-f630-4c8f-b56d-639d080bb588 852 0 2024-07-02 09:07:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cbcb9659c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-59f2e70dce calico-apiserver-cbcb9659c-gl7mv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab323296263 [] []}} ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.797 [INFO][5376] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.839 [INFO][5390] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" HandleID="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.856 [INFO][5390] ipam_plugin.go 264: Auto assigning IP ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" HandleID="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-59f2e70dce", "pod":"calico-apiserver-cbcb9659c-gl7mv", "timestamp":"2024-07-02 09:07:14.838574092 +0000 UTC"}, Hostname:"ci-3975.1.1-a-59f2e70dce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.856 [INFO][5390] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5390] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.860 [INFO][5390] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-59f2e70dce' Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.864 [INFO][5390] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.873 [INFO][5390] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.886 [INFO][5390] ipam.go 489: Trying affinity for 192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.894 [INFO][5390] ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.898 [INFO][5390] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.898 [INFO][5390] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.902 [INFO][5390] ipam.go 1685: Creating new handle: k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34 Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.912 [INFO][5390] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.928 [INFO][5390] ipam.go 1216: Successfully claimed IPs: [192.168.73.6/26] block=192.168.73.0/26 
handle="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.928 [INFO][5390] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.6/26] handle="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" host="ci-3975.1.1-a-59f2e70dce" Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.928 [INFO][5390] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:07:14.962686 containerd[1697]: 2024-07-02 09:07:14.929 [INFO][5390] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.73.6/26] IPv6=[] ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" HandleID="k8s-pod-network.1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Workload="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.935 [INFO][5376] k8s.go 386: Populated endpoint ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0", GenerateName:"calico-apiserver-cbcb9659c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bcc1d26-f630-4c8f-b56d-639d080bb588", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbcb9659c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"", Pod:"calico-apiserver-cbcb9659c-gl7mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab323296263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.936 [INFO][5376] k8s.go 387: Calico CNI using IPs: [192.168.73.6/32] ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.936 [INFO][5376] dataplane_linux.go 68: Setting the host side veth name to caliab323296263 ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.941 [INFO][5376] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.942 
[INFO][5376] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0", GenerateName:"calico-apiserver-cbcb9659c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bcc1d26-f630-4c8f-b56d-639d080bb588", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbcb9659c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-59f2e70dce", ContainerID:"1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34", Pod:"calico-apiserver-cbcb9659c-gl7mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab323296263", MAC:"0a:57:9a:b6:f4:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:07:14.963336 containerd[1697]: 2024-07-02 09:07:14.950 [INFO][5376] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34" Namespace="calico-apiserver" Pod="calico-apiserver-cbcb9659c-gl7mv" WorkloadEndpoint="ci--3975.1.1--a--59f2e70dce-k8s-calico--apiserver--cbcb9659c--gl7mv-eth0" Jul 2 09:07:14.982616 systemd[1]: Started cri-containerd-9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d.scope - libcontainer container 9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d. Jul 2 09:07:14.994773 containerd[1697]: time="2024-07-02T09:07:14.994675943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:07:14.996140 containerd[1697]: time="2024-07-02T09:07:14.994832223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:07:14.996140 containerd[1697]: time="2024-07-02T09:07:14.995139264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:07:14.996140 containerd[1697]: time="2024-07-02T09:07:14.995163504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:07:15.014591 systemd[1]: Started cri-containerd-1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34.scope - libcontainer container 1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34. 
Jul 2 09:07:15.062589 containerd[1697]: time="2024-07-02T09:07:15.062538486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbcb9659c-gl7mv,Uid:9bcc1d26-f630-4c8f-b56d-639d080bb588,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34\"" Jul 2 09:07:15.065725 containerd[1697]: time="2024-07-02T09:07:15.065536453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbcb9659c-xcrd7,Uid:8aed514d-ab59-4d3a-ba17-60a792518157,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d\"" Jul 2 09:07:15.074184 containerd[1697]: time="2024-07-02T09:07:15.074125191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 09:07:16.624499 systemd-networkd[1324]: caliab323296263: Gained IPv6LL Jul 2 09:07:16.689120 systemd-networkd[1324]: cali5ef52de2409: Gained IPv6LL Jul 2 09:07:16.912548 containerd[1697]: time="2024-07-02T09:07:16.910070521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:07:16.917100 containerd[1697]: time="2024-07-02T09:07:16.917046256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 09:07:16.918173 containerd[1697]: time="2024-07-02T09:07:16.918142218Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:07:16.922582 containerd[1697]: time="2024-07-02T09:07:16.922517267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:07:16.923258 containerd[1697]: 
time="2024-07-02T09:07:16.923080828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.848905637s" Jul 2 09:07:16.923258 containerd[1697]: time="2024-07-02T09:07:16.923119628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 09:07:16.924758 containerd[1697]: time="2024-07-02T09:07:16.924489111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 09:07:16.926109 containerd[1697]: time="2024-07-02T09:07:16.926049755Z" level=info msg="CreateContainer within sandbox \"9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 09:07:16.963223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950682650.mount: Deactivated successfully. Jul 2 09:07:16.971913 containerd[1697]: time="2024-07-02T09:07:16.971637571Z" level=info msg="CreateContainer within sandbox \"9cf0271824604ec48aca32323bfde89bfe667e87fc2505b2eebfc79f301cba4d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22e3767c6e30427a41f6e08e8751896f4d26e2f6bd440edfa2c9517e0c34a54b\"" Jul 2 09:07:16.973844 containerd[1697]: time="2024-07-02T09:07:16.973804256Z" level=info msg="StartContainer for \"22e3767c6e30427a41f6e08e8751896f4d26e2f6bd440edfa2c9517e0c34a54b\"" Jul 2 09:07:17.023624 systemd[1]: Started cri-containerd-22e3767c6e30427a41f6e08e8751896f4d26e2f6bd440edfa2c9517e0c34a54b.scope - libcontainer container 22e3767c6e30427a41f6e08e8751896f4d26e2f6bd440edfa2c9517e0c34a54b. 
Jul 2 09:07:17.064762 containerd[1697]: time="2024-07-02T09:07:17.064708368Z" level=info msg="StartContainer for \"22e3767c6e30427a41f6e08e8751896f4d26e2f6bd440edfa2c9517e0c34a54b\" returns successfully"
Jul 2 09:07:17.211495 containerd[1697]: time="2024-07-02T09:07:17.210745078Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:07:17.213546 containerd[1697]: time="2024-07-02T09:07:17.213505044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77"
Jul 2 09:07:17.216768 containerd[1697]: time="2024-07-02T09:07:17.216716771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 292.078338ms"
Jul 2 09:07:17.216935 containerd[1697]: time="2024-07-02T09:07:17.216919171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jul 2 09:07:17.219646 containerd[1697]: time="2024-07-02T09:07:17.219495856Z" level=info msg="CreateContainer within sandbox \"1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 09:07:17.256879 containerd[1697]: time="2024-07-02T09:07:17.256743135Z" level=info msg="CreateContainer within sandbox \"1a36376f6aabc6d179148d23912ef525667482637f5c60be99f4daf33cbc4e34\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e5ee8ddb48d24e7e0d106baa472d94e811a83ee49c8779cf14179123e2a61b85\""
Jul 2 09:07:17.257704 containerd[1697]: time="2024-07-02T09:07:17.257327297Z" level=info msg="StartContainer for \"e5ee8ddb48d24e7e0d106baa472d94e811a83ee49c8779cf14179123e2a61b85\""
Jul 2 09:07:17.287550 systemd[1]: Started cri-containerd-e5ee8ddb48d24e7e0d106baa472d94e811a83ee49c8779cf14179123e2a61b85.scope - libcontainer container e5ee8ddb48d24e7e0d106baa472d94e811a83ee49c8779cf14179123e2a61b85.
Jul 2 09:07:17.330530 containerd[1697]: time="2024-07-02T09:07:17.330477852Z" level=info msg="StartContainer for \"e5ee8ddb48d24e7e0d106baa472d94e811a83ee49c8779cf14179123e2a61b85\" returns successfully"
Jul 2 09:07:17.713098 kubelet[3236]: I0702 09:07:17.712257 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cbcb9659c-xcrd7" podStartSLOduration=3.859803655 podStartE2EDuration="5.7122143s" podCreationTimestamp="2024-07-02 09:07:12 +0000 UTC" firstStartedPulling="2024-07-02 09:07:15.071249865 +0000 UTC m=+72.786836008" lastFinishedPulling="2024-07-02 09:07:16.92366051 +0000 UTC m=+74.639246653" observedRunningTime="2024-07-02 09:07:17.71204678 +0000 UTC m=+75.427632923" watchObservedRunningTime="2024-07-02 09:07:17.7122143 +0000 UTC m=+75.427800403"
Jul 2 09:07:17.724384 kubelet[3236]: I0702 09:07:17.724328 3236 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cbcb9659c-gl7mv" podStartSLOduration=3.578297139 podStartE2EDuration="5.724283766s" podCreationTimestamp="2024-07-02 09:07:12 +0000 UTC" firstStartedPulling="2024-07-02 09:07:15.071278185 +0000 UTC m=+72.786864328" lastFinishedPulling="2024-07-02 09:07:17.217264812 +0000 UTC m=+74.932850955" observedRunningTime="2024-07-02 09:07:17.724026765 +0000 UTC m=+75.439612908" watchObservedRunningTime="2024-07-02 09:07:17.724283766 +0000 UTC m=+75.439869909"
Jul 2 09:07:32.592418 systemd[1]: run-containerd-runc-k8s.io-99e1eaea86966a5f2f8476915ebd26d51ebba5106cf1b08d12ff8bff07d847aa-runc.Is3xoQ.mount: Deactivated successfully.
Jul 2 09:07:40.170254 systemd[1]: run-containerd-runc-k8s.io-bca64991b945641fc3fc9716fcf94d36a4b030f1a669b5c3460c350762149946-runc.n7KoIW.mount: Deactivated successfully.
Jul 2 09:07:58.528667 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:37500.service - OpenSSH per-connection server daemon (10.200.16.10:37500).
Jul 2 09:07:58.937715 sshd[5699]: Accepted publickey for core from 10.200.16.10 port 37500 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:07:58.940475 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:07:58.945955 systemd-logind[1665]: New session 10 of user core.
Jul 2 09:07:58.950525 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 09:07:59.314160 sshd[5699]: pam_unix(sshd:session): session closed for user core
Jul 2 09:07:59.317807 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:37500.service: Deactivated successfully.
Jul 2 09:07:59.321577 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 09:07:59.323338 systemd-logind[1665]: Session 10 logged out. Waiting for processes to exit.
Jul 2 09:07:59.324657 systemd-logind[1665]: Removed session 10.
Jul 2 09:08:04.391282 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:37502.service - OpenSSH per-connection server daemon (10.200.16.10:37502).
Jul 2 09:08:04.815222 sshd[5740]: Accepted publickey for core from 10.200.16.10 port 37502 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:04.817066 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:04.821686 systemd-logind[1665]: New session 11 of user core.
Jul 2 09:08:04.824573 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 09:08:05.186567 sshd[5740]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:05.189449 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:37502.service: Deactivated successfully.
Jul 2 09:08:05.191875 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 09:08:05.192967 systemd-logind[1665]: Session 11 logged out. Waiting for processes to exit.
Jul 2 09:08:05.194825 systemd-logind[1665]: Removed session 11.
Jul 2 09:08:10.273662 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:58636.service - OpenSSH per-connection server daemon (10.200.16.10:58636).
Jul 2 09:08:10.719602 sshd[5783]: Accepted publickey for core from 10.200.16.10 port 58636 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:10.721205 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:10.724986 systemd-logind[1665]: New session 12 of user core.
Jul 2 09:08:10.733537 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 09:08:11.117733 sshd[5783]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:11.121401 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:58636.service: Deactivated successfully.
Jul 2 09:08:11.124328 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 09:08:11.125028 systemd-logind[1665]: Session 12 logged out. Waiting for processes to exit.
Jul 2 09:08:11.126015 systemd-logind[1665]: Removed session 12.
Jul 2 09:08:11.198685 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:58644.service - OpenSSH per-connection server daemon (10.200.16.10:58644).
Jul 2 09:08:11.611053 sshd[5797]: Accepted publickey for core from 10.200.16.10 port 58644 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:11.612393 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:11.616443 systemd-logind[1665]: New session 13 of user core.
Jul 2 09:08:11.622537 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 09:08:12.022828 sshd[5797]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:12.026756 systemd-logind[1665]: Session 13 logged out. Waiting for processes to exit.
Jul 2 09:08:12.026762 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 09:08:12.028312 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:58644.service: Deactivated successfully.
Jul 2 09:08:12.032653 systemd-logind[1665]: Removed session 13.
Jul 2 09:08:12.099179 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:58660.service - OpenSSH per-connection server daemon (10.200.16.10:58660).
Jul 2 09:08:12.517134 sshd[5808]: Accepted publickey for core from 10.200.16.10 port 58660 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:12.518132 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:12.523720 systemd-logind[1665]: New session 14 of user core.
Jul 2 09:08:12.527744 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 09:08:12.894857 sshd[5808]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:12.898064 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:58660.service: Deactivated successfully.
Jul 2 09:08:12.900603 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 09:08:12.903090 systemd-logind[1665]: Session 14 logged out. Waiting for processes to exit.
Jul 2 09:08:12.904348 systemd-logind[1665]: Removed session 14.
Jul 2 09:08:17.981632 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:58674.service - OpenSSH per-connection server daemon (10.200.16.10:58674).
Jul 2 09:08:18.433848 sshd[5845]: Accepted publickey for core from 10.200.16.10 port 58674 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:18.435292 sshd[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:18.439653 systemd-logind[1665]: New session 15 of user core.
Jul 2 09:08:18.446571 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 09:08:18.865077 sshd[5845]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:18.867819 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:58674.service: Deactivated successfully.
Jul 2 09:08:18.870152 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 09:08:18.872127 systemd-logind[1665]: Session 15 logged out. Waiting for processes to exit.
Jul 2 09:08:18.874065 systemd-logind[1665]: Removed session 15.
Jul 2 09:08:23.943731 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:54800.service - OpenSSH per-connection server daemon (10.200.16.10:54800).
Jul 2 09:08:24.357583 sshd[5859]: Accepted publickey for core from 10.200.16.10 port 54800 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:24.359110 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:24.363719 systemd-logind[1665]: New session 16 of user core.
Jul 2 09:08:24.369538 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 09:08:24.721610 sshd[5859]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:24.724673 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:54800.service: Deactivated successfully.
Jul 2 09:08:24.726863 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 09:08:24.728905 systemd-logind[1665]: Session 16 logged out. Waiting for processes to exit.
Jul 2 09:08:24.730305 systemd-logind[1665]: Removed session 16.
Jul 2 09:08:29.797145 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:38690.service - OpenSSH per-connection server daemon (10.200.16.10:38690).
Jul 2 09:08:30.210593 sshd[5877]: Accepted publickey for core from 10.200.16.10 port 38690 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:30.212083 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:30.216852 systemd-logind[1665]: New session 17 of user core.
Jul 2 09:08:30.221582 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 09:08:30.575613 sshd[5877]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:30.579367 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:38690.service: Deactivated successfully.
Jul 2 09:08:30.582312 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 09:08:30.584010 systemd-logind[1665]: Session 17 logged out. Waiting for processes to exit.
Jul 2 09:08:30.585610 systemd-logind[1665]: Removed session 17.
Jul 2 09:08:35.658177 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:38698.service - OpenSSH per-connection server daemon (10.200.16.10:38698).
Jul 2 09:08:36.101708 sshd[5912]: Accepted publickey for core from 10.200.16.10 port 38698 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:36.103668 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:36.108418 systemd-logind[1665]: New session 18 of user core.
Jul 2 09:08:36.113740 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 09:08:36.505572 sshd[5912]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:36.509494 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:38698.service: Deactivated successfully.
Jul 2 09:08:36.511299 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 09:08:36.513312 systemd-logind[1665]: Session 18 logged out. Waiting for processes to exit.
Jul 2 09:08:36.514739 systemd-logind[1665]: Removed session 18.
Jul 2 09:08:36.591678 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:38708.service - OpenSSH per-connection server daemon (10.200.16.10:38708).
Jul 2 09:08:37.048466 sshd[5925]: Accepted publickey for core from 10.200.16.10 port 38708 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:37.049995 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:37.054550 systemd-logind[1665]: New session 19 of user core.
Jul 2 09:08:37.065553 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 09:08:37.535061 sshd[5925]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:37.538231 systemd-logind[1665]: Session 19 logged out. Waiting for processes to exit.
Jul 2 09:08:37.538546 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:38708.service: Deactivated successfully.
Jul 2 09:08:37.540312 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 09:08:37.542869 systemd-logind[1665]: Removed session 19.
Jul 2 09:08:37.616647 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:38720.service - OpenSSH per-connection server daemon (10.200.16.10:38720).
Jul 2 09:08:38.027379 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 38720 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:38.028754 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:38.033377 systemd-logind[1665]: New session 20 of user core.
Jul 2 09:08:38.042547 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 09:08:39.914245 sshd[5941]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:39.917041 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:38720.service: Deactivated successfully.
Jul 2 09:08:39.920010 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 09:08:39.922507 systemd-logind[1665]: Session 20 logged out. Waiting for processes to exit.
Jul 2 09:08:39.924432 systemd-logind[1665]: Removed session 20.
Jul 2 09:08:39.996262 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:41000.service - OpenSSH per-connection server daemon (10.200.16.10:41000).
Jul 2 09:08:40.442978 sshd[5962]: Accepted publickey for core from 10.200.16.10 port 41000 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:40.444490 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:40.448964 systemd-logind[1665]: New session 21 of user core.
Jul 2 09:08:40.453558 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 09:08:40.952683 sshd[5962]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:40.956122 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:41000.service: Deactivated successfully.
Jul 2 09:08:40.958724 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 09:08:40.960855 systemd-logind[1665]: Session 21 logged out. Waiting for processes to exit.
Jul 2 09:08:40.962296 systemd-logind[1665]: Removed session 21.
Jul 2 09:08:41.032647 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:41006.service - OpenSSH per-connection server daemon (10.200.16.10:41006).
Jul 2 09:08:41.438928 sshd[5995]: Accepted publickey for core from 10.200.16.10 port 41006 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:41.440372 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:41.444023 systemd-logind[1665]: New session 22 of user core.
Jul 2 09:08:41.449526 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 09:08:41.798578 sshd[5995]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:41.802002 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:41006.service: Deactivated successfully.
Jul 2 09:08:41.804226 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 09:08:41.806157 systemd-logind[1665]: Session 22 logged out. Waiting for processes to exit.
Jul 2 09:08:41.807163 systemd-logind[1665]: Removed session 22.
Jul 2 09:08:46.882655 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:41018.service - OpenSSH per-connection server daemon (10.200.16.10:41018).
Jul 2 09:08:47.322871 sshd[6008]: Accepted publickey for core from 10.200.16.10 port 41018 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:47.324186 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:47.327830 systemd-logind[1665]: New session 23 of user core.
Jul 2 09:08:47.334513 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 09:08:47.720630 sshd[6008]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:47.724984 systemd-logind[1665]: Session 23 logged out. Waiting for processes to exit.
Jul 2 09:08:47.725180 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:41018.service: Deactivated successfully.
Jul 2 09:08:47.728251 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 09:08:47.729334 systemd-logind[1665]: Removed session 23.
Jul 2 09:08:52.806730 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:59540.service - OpenSSH per-connection server daemon (10.200.16.10:59540).
Jul 2 09:08:53.252585 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 59540 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:53.254022 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:53.257896 systemd-logind[1665]: New session 24 of user core.
Jul 2 09:08:53.265566 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 09:08:53.655589 sshd[6032]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:53.659074 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:59540.service: Deactivated successfully.
Jul 2 09:08:53.662085 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 09:08:53.663553 systemd-logind[1665]: Session 24 logged out. Waiting for processes to exit.
Jul 2 09:08:53.664728 systemd-logind[1665]: Removed session 24.
Jul 2 09:08:58.735675 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:40326.service - OpenSSH per-connection server daemon (10.200.16.10:40326).
Jul 2 09:08:59.145791 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 40326 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:08:59.147312 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:08:59.151469 systemd-logind[1665]: New session 25 of user core.
Jul 2 09:08:59.158556 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 09:08:59.509611 sshd[6071]: pam_unix(sshd:session): session closed for user core
Jul 2 09:08:59.513194 systemd-logind[1665]: Session 25 logged out. Waiting for processes to exit.
Jul 2 09:08:59.513986 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:40326.service: Deactivated successfully.
Jul 2 09:08:59.516997 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 09:08:59.518913 systemd-logind[1665]: Removed session 25.
Jul 2 09:09:04.592667 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:40334.service - OpenSSH per-connection server daemon (10.200.16.10:40334).
Jul 2 09:09:04.999709 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 40334 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:09:05.001577 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:09:05.005569 systemd-logind[1665]: New session 26 of user core.
Jul 2 09:09:05.013538 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 09:09:05.363915 sshd[6107]: pam_unix(sshd:session): session closed for user core
Jul 2 09:09:05.367404 systemd-logind[1665]: Session 26 logged out. Waiting for processes to exit.
Jul 2 09:09:05.368014 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:40334.service: Deactivated successfully.
Jul 2 09:09:05.371022 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 09:09:05.372190 systemd-logind[1665]: Removed session 26.
Jul 2 09:09:10.439522 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:48346.service - OpenSSH per-connection server daemon (10.200.16.10:48346).
Jul 2 09:09:10.852788 sshd[6147]: Accepted publickey for core from 10.200.16.10 port 48346 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:09:10.854431 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:09:10.858463 systemd-logind[1665]: New session 27 of user core.
Jul 2 09:09:10.867549 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 09:09:11.217648 sshd[6147]: pam_unix(sshd:session): session closed for user core
Jul 2 09:09:11.221670 systemd-logind[1665]: Session 27 logged out. Waiting for processes to exit.
Jul 2 09:09:11.222472 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:48346.service: Deactivated successfully.
Jul 2 09:09:11.225702 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 09:09:11.226635 systemd-logind[1665]: Removed session 27.
Jul 2 09:09:16.299611 systemd[1]: Started sshd@25-10.200.20.37:22-10.200.16.10:48354.service - OpenSSH per-connection server daemon (10.200.16.10:48354).
Jul 2 09:09:16.747711 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 48354 ssh2: RSA SHA256:NcL9DaKpsftrKocpYJW4oGMW3luCC5d2WOCQSYm7a7o
Jul 2 09:09:16.749064 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:09:16.755007 systemd-logind[1665]: New session 28 of user core.
Jul 2 09:09:16.758555 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 09:09:17.148304 sshd[6161]: pam_unix(sshd:session): session closed for user core
Jul 2 09:09:17.152003 systemd-logind[1665]: Session 28 logged out. Waiting for processes to exit.
Jul 2 09:09:17.152223 systemd[1]: sshd@25-10.200.20.37:22-10.200.16.10:48354.service: Deactivated successfully.
Jul 2 09:09:17.153970 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 09:09:17.156082 systemd-logind[1665]: Removed session 28.
Jul 2 09:09:30.588249 systemd[1]: cri-containerd-c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea.scope: Deactivated successfully.
Jul 2 09:09:30.589581 systemd[1]: cri-containerd-c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea.scope: Consumed 5.506s CPU time.
Jul 2 09:09:30.609076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea-rootfs.mount: Deactivated successfully.
Jul 2 09:09:30.609880 containerd[1697]: time="2024-07-02T09:09:30.609673899Z" level=info msg="shim disconnected" id=c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea namespace=k8s.io
Jul 2 09:09:30.609880 containerd[1697]: time="2024-07-02T09:09:30.609740700Z" level=warning msg="cleaning up after shim disconnected" id=c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea namespace=k8s.io
Jul 2 09:09:30.609880 containerd[1697]: time="2024-07-02T09:09:30.609749260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:09:30.962843 kubelet[3236]: I0702 09:09:30.962164 3236 scope.go:117] "RemoveContainer" containerID="c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea"
Jul 2 09:09:30.965372 containerd[1697]: time="2024-07-02T09:09:30.965087134Z" level=info msg="CreateContainer within sandbox \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 09:09:30.992685 containerd[1697]: time="2024-07-02T09:09:30.992595287Z" level=info msg="CreateContainer within sandbox \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd\""
Jul 2 09:09:30.993456 containerd[1697]: time="2024-07-02T09:09:30.993073128Z" level=info msg="StartContainer for \"cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd\""
Jul 2 09:09:31.024708 systemd[1]: Started cri-containerd-cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd.scope - libcontainer container cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd.
Jul 2 09:09:31.052208 containerd[1697]: time="2024-07-02T09:09:31.052161280Z" level=info msg="StartContainer for \"cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd\" returns successfully"
Jul 2 09:09:31.823966 systemd[1]: cri-containerd-2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030.scope: Deactivated successfully.
Jul 2 09:09:31.824299 systemd[1]: cri-containerd-2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030.scope: Consumed 3.562s CPU time, 20.5M memory peak, 0B memory swap peak.
Jul 2 09:09:31.848218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030-rootfs.mount: Deactivated successfully.
Jul 2 09:09:31.849737 containerd[1697]: time="2024-07-02T09:09:31.849676934Z" level=info msg="shim disconnected" id=2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030 namespace=k8s.io
Jul 2 09:09:31.850021 containerd[1697]: time="2024-07-02T09:09:31.849745294Z" level=warning msg="cleaning up after shim disconnected" id=2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030 namespace=k8s.io
Jul 2 09:09:31.850021 containerd[1697]: time="2024-07-02T09:09:31.849753894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:09:31.965765 kubelet[3236]: I0702 09:09:31.965518 3236 scope.go:117] "RemoveContainer" containerID="2344622db33e34aa87930740e72f9157913dabddcaa6328a42758bb136f9a030"
Jul 2 09:09:31.971763 containerd[1697]: time="2024-07-02T09:09:31.971610523Z" level=info msg="CreateContainer within sandbox \"b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 09:09:32.002659 containerd[1697]: time="2024-07-02T09:09:32.002612161Z" level=info msg="CreateContainer within sandbox \"b21de8fe1859f9534ff471aee76eb68ca510d86b4e89650b1718146fb34d27c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8007378efc4c54612ed0658661645673db51c5d262ded2b0b849d8625443dc78\""
Jul 2 09:09:32.003275 containerd[1697]: time="2024-07-02T09:09:32.003234322Z" level=info msg="StartContainer for \"8007378efc4c54612ed0658661645673db51c5d262ded2b0b849d8625443dc78\""
Jul 2 09:09:32.037588 systemd[1]: Started cri-containerd-8007378efc4c54612ed0658661645673db51c5d262ded2b0b849d8625443dc78.scope - libcontainer container 8007378efc4c54612ed0658661645673db51c5d262ded2b0b849d8625443dc78.
Jul 2 09:09:32.074818 containerd[1697]: time="2024-07-02T09:09:32.074758649Z" level=info msg="StartContainer for \"8007378efc4c54612ed0658661645673db51c5d262ded2b0b849d8625443dc78\" returns successfully"
Jul 2 09:09:36.250809 kubelet[3236]: E0702 09:09:36.250700 3236 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 09:09:36.540313 systemd[1]: cri-containerd-5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d.scope: Deactivated successfully.
Jul 2 09:09:36.540635 systemd[1]: cri-containerd-5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d.scope: Consumed 3.155s CPU time, 15.7M memory peak, 0B memory swap peak.
Jul 2 09:09:36.567252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d-rootfs.mount: Deactivated successfully.
Jul 2 09:09:36.568497 containerd[1697]: time="2024-07-02T09:09:36.567064685Z" level=info msg="shim disconnected" id=5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d namespace=k8s.io
Jul 2 09:09:36.568497 containerd[1697]: time="2024-07-02T09:09:36.567134125Z" level=warning msg="cleaning up after shim disconnected" id=5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d namespace=k8s.io
Jul 2 09:09:36.568497 containerd[1697]: time="2024-07-02T09:09:36.567143325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:09:36.723834 kubelet[3236]: E0702 09:09:36.723793 3236 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:43212->10.200.20.31:2379: read: connection timed out"
Jul 2 09:09:36.980104 kubelet[3236]: I0702 09:09:36.979931 3236 scope.go:117] "RemoveContainer" containerID="5a884bc9641bc8765da9fde99ec7be4680ef8dbfaa5484b46fcacf7a12720f8d"
Jul 2 09:09:36.982912 containerd[1697]: time="2024-07-02T09:09:36.982856005Z" level=info msg="CreateContainer within sandbox \"872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 09:09:37.019756 containerd[1697]: time="2024-07-02T09:09:37.019693380Z" level=info msg="CreateContainer within sandbox \"872cffa69672b9aa25594fb65f603e848db9250e3d12025ec3b1fb353a98e494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"73a3648fffce56d6104593798928a38e9ec7f243a401147824fab6c73a31e5f8\""
Jul 2 09:09:37.020517 containerd[1697]: time="2024-07-02T09:09:37.020490021Z" level=info msg="StartContainer for \"73a3648fffce56d6104593798928a38e9ec7f243a401147824fab6c73a31e5f8\""
Jul 2 09:09:37.050582 systemd[1]: Started cri-containerd-73a3648fffce56d6104593798928a38e9ec7f243a401147824fab6c73a31e5f8.scope - libcontainer container 73a3648fffce56d6104593798928a38e9ec7f243a401147824fab6c73a31e5f8.
Jul 2 09:09:37.084313 containerd[1697]: time="2024-07-02T09:09:37.084162717Z" level=info msg="StartContainer for \"73a3648fffce56d6104593798928a38e9ec7f243a401147824fab6c73a31e5f8\" returns successfully"
Jul 2 09:09:39.861028 systemd[1]: cri-containerd-cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd.scope: Deactivated successfully.
Jul 2 09:09:39.882732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd-rootfs.mount: Deactivated successfully.
Jul 2 09:09:39.891653 containerd[1697]: time="2024-07-02T09:09:39.891566744Z" level=info msg="shim disconnected" id=cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd namespace=k8s.io
Jul 2 09:09:39.891653 containerd[1697]: time="2024-07-02T09:09:39.891645144Z" level=warning msg="cleaning up after shim disconnected" id=cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd namespace=k8s.io
Jul 2 09:09:39.891653 containerd[1697]: time="2024-07-02T09:09:39.891654384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:09:39.988067 kubelet[3236]: I0702 09:09:39.988032 3236 scope.go:117] "RemoveContainer" containerID="c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea"
Jul 2 09:09:39.988466 kubelet[3236]: I0702 09:09:39.988386 3236 scope.go:117] "RemoveContainer" containerID="cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd"
Jul 2 09:09:39.988707 kubelet[3236]: E0702 09:09:39.988660 3236 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-2tfkl_tigera-operator(74933842-2f42-4cd5-a700-8133b9c74a82)\"" pod="tigera-operator/tigera-operator-76c4974c85-2tfkl" podUID="74933842-2f42-4cd5-a700-8133b9c74a82"
Jul 2 09:09:39.990225 containerd[1697]: time="2024-07-02T09:09:39.989960372Z" level=info msg="RemoveContainer for \"c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea\""
Jul 2 09:09:39.997404 containerd[1697]: time="2024-07-02T09:09:39.997342543Z" level=info msg="RemoveContainer for \"c6219b307394f87bfa1359e86945f676f01bd76e24942e5260b4a52fead02fea\" returns successfully"
Jul 2 09:09:41.393583 kubelet[3236]: I0702 09:09:41.393529 3236 status_manager.go:853] "Failed to get status for pod" podUID="74933842-2f42-4cd5-a700-8133b9c74a82" pod="tigera-operator/tigera-operator-76c4974c85-2tfkl" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:43140->10.200.20.31:2379: read: connection timed out"
Jul 2 09:09:46.724323 kubelet[3236]: E0702 09:09:46.724184 3236 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-59f2e70dce?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 09:09:53.192717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.193092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.210494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.211077 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.226961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.227283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.243332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.243656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.266139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.266462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.282285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.282592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.290387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.306034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.306377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.321874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.337384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.337670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.353097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.353410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.368596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.368882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.384225 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.384567 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.407884 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.408210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.423971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.424304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.440076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.440479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.456370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.456685 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.496980 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.497185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.497295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.505672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.514098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.514397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.531549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.531889 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.563642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 09:09:53.563978 kernel: hv_storvsc
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.564108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.580664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.581159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.596573 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.596916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.612252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.612565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.628451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.628830 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.643952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.644260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.660021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.660324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.675739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.676095 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.691974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.692502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.709712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.710024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.725999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.726501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.742970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.743396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.759183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.759523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.775490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.775896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.791174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.791541 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
09:09:53.807209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.807529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.823527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.823922 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.840484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.841102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.857432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.857828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.873870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.874217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.890744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.891132 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.906984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.907322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.924418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Jul 2 09:09:53.924819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.940721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.941257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.957797 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.958183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.975001 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.975317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.992185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:53.992677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.015239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.015590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.031856 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.032274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.048368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.048727 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 09:09:54.066216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.066647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.082947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.083459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.099412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.099885 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.116037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.116552 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.125462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.142394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.151329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.151711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.168037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.168377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.184892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.185278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.202388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.202734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.219075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.219562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.235447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.235836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.251694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.252014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.267849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.268193 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.283697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.291846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.292145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.307790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.308188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.323990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.324533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.332829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.348153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.348638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.356644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.364542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.380988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.381267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.382517 kubelet[3236]: I0702 09:09:54.381801 3236 scope.go:117] "RemoveContainer" containerID="cf4d3cf35791dca94ee5d51619526d93ade6e93633c97497f2b15cb39e3800dd" Jul 2 09:09:54.387014 containerd[1697]: time="2024-07-02T09:09:54.386681501Z" level=info msg="CreateContainer within sandbox \"de941b62d6ef6eb7185996bfb337adc3ffe93b42b0fc34a21bdd974aed02b176\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jul 2 09:09:54.389628 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.405461 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.405947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.421754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.422208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.439417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.439769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.456079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.456522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.472915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.473202 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.489689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.490027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.498389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.515212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.515600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
09:09:54.531794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.532227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.548589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.548961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.565272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.565601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.581959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.582337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.598720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.599151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.615472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.615766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.632727 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.633165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.657376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Jul 2 09:09:54.657849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.657972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.673165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.673596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.689101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.689506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.705094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.705481 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.721501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.721845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.729446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.745977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.746345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.762314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.762766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 09:09:54.777968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.778302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.794331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.794930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.802976 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.819048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.819434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.835546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.836023 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.852271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.852759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.869449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.869867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.885924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.886274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.902531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.902880 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.919066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.919346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.936432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.945426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.945765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.962199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.962591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.979223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.979601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.997140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:54.997540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.005469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.022297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.022642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.038572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.038945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.047722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.064230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.064721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.080568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.081062 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.089443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.106251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.106595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.122073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.122373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.138156 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.138648 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.154684 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.163995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.164284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.183014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.183618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.208160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.208942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.233001 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.233712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.259322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.259930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.281452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.282000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.300854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#36 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.301266 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.318331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.318791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.337539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#29 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.338078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.356461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#42 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.356949 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.374671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.375061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.392805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 09:09:55.393328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#2 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001